Let's Get Personal
On AI Use Cases
February 14, 2026
One of my favorite conversation starters these days is to ask people if they have any creative ways they are using AI personally. Feel free to steal this for your own use, by the way. It can offer some interesting insights. I had one person, for example, explain how they take pictures of a plate of food and then have AI calculate their macros so they can more easily stick to a nutrition plan. Another friend uses AI to preview potential remodel changes before committing to any materials or effort. And given that I work at an AI company, one of my favorite use cases is from my manager who uses AI to create his meditations. This is on my to-do list.
So, pretending we're having this conversation, let me answer that question myself:
I'm so glad you asked!
There are several applications I could choose from, including using ElevenLabs to recreate a silly voice my dad does (a fun insert for his birthday celebration) and creating an agent to guide neonate kitten care (four weeks and younger). The latter is near and dear to my heart because I actively volunteer with my local animal shelter, and it's definitely worth a post of its own another day.
The one that sticks out, though, is my reading list agent.
You’re probably thinking that doesn’t sound that impressive. But the time this agent saves me is significant. Allow me to give you some background on this.
I read.
A lot.
Reading is for me both a resource and an escape. If you haven't already gotten the impression from my past posts that I gravitate toward the "qualified nerd" label, I'm going to give you an empirical data point to support it: I track my reading year over year.
This is mostly so I can keep track of what I've read. My system of organization is really quite simple: when I finish a book, I mark it on my calendar.
The format looks like this: “Read: Book title” as an all day event.
Color coded, of course.
Most of my books come from the library, which aggregates my checkout history. But there are exceptions (impulse purchases, gifts), so my calendar ends up being the most accurate tracking tool and the simplest way to tally my reading volume. I review the "Read" calendar entries and count them up. Worth noting, from a data perspective, that the number of titles doesn't always reflect the effort or time invested; as we all know, books vary in length and topic density. In the interest of having more time to read, though, I've decided my metric tracking shouldn't delve that deep.
Another layer of this is that, because I love books, I have a tendency to give them the benefit of the doubt longer than I should. It feels like a war in my head sometimes when I'm struggling through a book I'm not enjoying but waiting for it to get better. Because it has to get better. Right? And when I'm wrong, more than 200 pages in or often an entire finished book later, I'm annoyed at all the time I've invested in something that hasn't paid off in enjoyment or enrichment. This was exactly why I needed to build a better system.
So, the considerations that went into developing this. I wrapped my solution into an agent, and that was intentional, to support repeatability. I wanted a framework I could personalize with a list of titles I've enjoyed as well as ones that haven't fit my preferences, all wrapped into a nice little AI librarian I can call upon whenever I'm considering a new title or second-guessing one I've already started. Roughly, the profile side looks like the sketch below.
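Here's a minimal sketch, in Python, of the kind of preference profile the agent is built around. The titles and wording are placeholders for illustration, not my actual configuration.

```python
# Illustrative only: placeholder titles and wording, not my real profile.

LIKED = [
    "Something in the Heir",              # light, witty, character-driven
    "A Fantasy Title With Great Pacing",  # placeholder
]

DID_NOT_FIT = [
    "A Slow Burn That Never Paid Off",    # placeholder
]

def build_preference_profile(liked: list[str], did_not_fit: list[str]) -> str:
    """Turn my reading history into the standing context the librarian agent
    carries into every conversation."""
    return (
        "You are my personal librarian.\n"
        f"Books I enjoyed: {', '.join(liked)}.\n"
        f"Books that did not fit my preferences: {', '.join(did_not_fit)}.\n"
        "Use these to judge how well a new title is likely to fit me."
    )

if __name__ == "__main__":
    print(build_preference_profile(LIKED, DID_NOT_FIT))
```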
Because some of my books were newer, a necessary element of this agent was making sure the information it had access to was fresh enough to cover recently published titles. For example, when I was testing this agent across platforms, the ChatGPT version gave me two very creative versions of a book I was reading that weren't at all close to the plot and didn't even have the same characters. Which, though creative, was not helpful.
AI is designed to make the user "happy," and as part of that, it will almost always try to respond to and satisfy a prompt. This is where hallucinations often happen. The reason ChatGPT wasn't able to give me a more accurate response was that the book was published after the training cutoff date for that particular model. If you want to dig into this topic further, we touch on it in the Platforms and Prejudice post. You can find training details, including cutoff dates, in LLM platform documentation as well as across a number of eval sources such as Vellum.
Going back to the agent I created and the technical elements involved, this example is a great illustration of why it's important to consider the tools your AI solution needs in its framework. You want to make sure they support and best equip the desired output. For my librarian agent, that meant a search index that could look at web resources and pick up updates on recently published titles. Shoutout to the you.com platform, which includes such a search index. A rough sketch of how that tool call might look follows.
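For illustration, here's a sketch of giving the agent a web search tool for fresh title information. The endpoint URL, parameter names, header, and response shape below are assumptions, not the actual you.com API; check the platform's documentation for the real interface.

```python
import os
import requests

# Placeholder endpoint and auth scheme, assumed for illustration only.
SEARCH_URL = "https://api.example-search.com/search"

def search_recent_title_info(title: str, author: str) -> list[dict]:
    """Pull current web results for a book so the agent is not limited to
    whatever existed at its model's training cutoff."""
    response = requests.get(
        SEARCH_URL,
        params={"query": f'"{title}" {author} book plot reviews'},
        headers={"X-API-Key": os.environ["SEARCH_API_KEY"]},  # assumed auth
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"results": [{"title": ..., "snippet": ..., "url": ...}]}
    return response.json().get("results", [])
```

The agent gets these snippets alongside my preference profile, so its assessment is grounded in what the book actually is rather than a hallucinated plot.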
Another consideration I made when creating this agent was to make sure the instructions included not only my book preferences but also my response preferences. I didn't want the agent to make the decision for me. Instead, I wanted the response to equip me to make an informed decision. This approach reflects what we call "human-in-the-loop" design: essentially, how do we build AI that informs rather than replaces human judgment? That part of the instructions looks roughly like the sketch below.
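Here's an illustrative sketch of the response-preference portion of the instructions; the wording is a stand-in for whatever you'd want from your own agent, not my exact prompt.

```python
# Illustrative wording, not my exact prompt: the agent surfaces evidence and
# reasoning, but the decision explicitly stays with me.
RESPONSE_PREFERENCES = """
When I ask about a title:
1. Summarize what the book actually is, using the fresh search results.
2. List the elements likely to work for me and likely to bother me,
   tied back to specific books in my history.
3. Give a leaning (good fit / poor fit / mixed) with your reasoning visible.
4. Do not tell me whether to read it. The decision stays with me.
"""

def full_instructions(preference_profile: str) -> str:
    """Combine the preference profile with how I want answers delivered."""
    return preference_profile + "\n" + RESPONSE_PREFERENCES
```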
And voila - we have an agent.
You can think of this solution as similar to asking a friend for an opinion. They'll do their best to account for what you like and what you don't like when making a recommendation. But they won't always be right. AI isn't right one hundred percent of the time, in the same way you and I aren't either. This was part of the reason I wanted transparency into the reasoning my agent put in front of me. As I QA'ed the output, my agent proved right enough times that I have confidence in how well it understands my preferences. But my job is still to weigh how I feel about the feedback it lays out.
For example, I recently asked: "How do we feel about the book 'Sword Catcher' by Cassandra Clare?" According to my agent, this book would likely not be a good fit for me; potential issues with slow pacing, violence, and dense setup were cited. I thought the premise was interesting enough, though, that I still gave it a try, and I'm forty percent through and enjoying it. And this is the point. I had enough information to weigh my decision and understand what I was getting into. If I had hated the book after the first few chapters, that same evaluation would have equally justified putting it away, and probably a lot sooner.
More often, the agent's feedback has been spot on. It also often provides value beyond a simple yea-or-nay evaluation. Several times, when steering me away from a book, my agent has included recommendations it thinks I will enjoy more, which has delightfully let me discover titles I might not have otherwise found. Recently, this included "Something in the Heir" by Suzanne Enoch, a playful and hilarious Victorian novel about a couple that pays to borrow a pair of orphans to satisfy an inheritance requirement and realizes their simple solution comes with complications they didn't anticipate. It was a light and fun read and something I would have missed if it weren't for my agent's input.
The most profound impact of this solution for me, though, has been culling my "to read" list to something that feels more sane and less overwhelming. For the last several years it's hovered around 130-150 titles. Last year, I made a New Year's resolution to get that below 100, just so it would feel manageable. Which, given the rate at which I read, should have been something I accomplished easily in the first half of the year.
But I kept adding titles.
These came from recommendations, book release notices (yes, my Google feed has me figured out), and various other sources. So I never felt like I could actually make headway on that goal.
Which only led to me reading more furiously and not really enjoying it, because reading had become a chore. All the while, I felt like I was failing since, according to the metric of my goal, I wasn't making progress.
For you, the goal is likely something different. I would guess, though, that you can relate to the frustration that happens when you're trying to make a breakthrough and just keep missing. Maybe it's fitness or nutrition, maybe it's learning a language or a new skill, or maybe it's trying to figure out how to declutter your closet. Whatever your struggle, the inspiration I hope you take away from this post is that, with AI technology, you may now have help in how you tackle it.
After I ran my "to-read" list through my agent, I was able to narrow 130 titles down to 78, and I'm hovering around 80 right now. Which feels reasonable, not overwhelming, and allows me to indulge in my books with the confidence that I can and will enjoy doing so. I also get to add titles to my "to-read" list without adding to a growing sense of overwhelm. That feels like a win, and it saves me the time I would have spent on books that, per my preferences, may not have been worth it.
Which, to delve into the topic of time a bit more, is something notable in this era of AI. With the recent advances in technology, time also seems to be accelerating. We have the tools to do things faster, and I sometimes struggle to be okay with examining something in detail or sitting with its deeper nuances. Similar to that annoying mantra in my head when I'm reading a book I'm not enjoying ("this has to get better"), when I'm taking my time working through something, "why can't I get this done faster" becomes the refrain.
Faster has become our new norm. To offer a PSA on this, it’s okay to slow down. It’s okay to read, to digest, to make sure you understand something in the way that you need to. And as we work faster and faster, we equally need to make sure we factor in breaks in how we balance our interactions with novel technologies.
A recent (February 9, 2026) Harvard Business Review study found that AI increases work intensity. Employees in organizations that have adopted AI are working faster and for longer hours, and they tend to take on broader responsibilities. On the surface, that may look like the paradigm shift most companies want to leverage this technology to achieve. But the other side is that AI is also enabling more constant, break-less, multitasking work. Given that we're still humans (or at least were last time I checked), this creates cognitive fatigue and can lead to burnout. Which is an important thing to be aware of in how we personally manage our capacity, pay attention to how we're feeling, and recognize what we need.
I'm definitely the pot calling the kettle black here, and I'm trying to make a conscious effort to be aware of this in myself. Lately I've had my share of nights and weekends where I've found myself digging into one wormhole or another and spending more time on work. Part of this is because I love what I do, but that doesn't mean the balance isn't slightly skewed at the moment. A side effect is that my time outside of work has become inherently more valuable. Which is why a reading agent was something I found a lot of value in creating for my personal use.
Hopefully this post inspires you to do something similar. Just remember, as you create your solution, make sure to match the resources to the use case and build a framework that will be appropriate for whichever AI optimization fits your needs.
I can’t wait to hear all about it.