The pharmacy calls one person in the household when his prescriptions are about to run out.
It does not call the other.
This is not because the second person does not exist. The second person is very much present in the world. She pays attention to time. She notices when pills are getting low. She possesses a phone that rings, vibrates, lights up, and occasionally interrupts dinner for reasons far less medically important.
Yet the pharmacy’s phone system—apparently capable of tracking the slow disappearance of tiny tablets across weeks—has surveyed reality and concluded: this individual does not require notification.
The first person, however, clearly does.
When his pills approach extinction, the system awakens. Somewhere, a server hums. A rule fires. A voice—pleasant, calm, and impossible to interrupt—reaches out across the network to say, “Hello. We’re calling to let you know—”
The second person hears this only indirectly, filtered through the domestic environment, like weather news overheard from another room.
Her own prescriptions, meanwhile, inspire no such concern. They are apparently expected to live independently, like houseplants that will water themselves.
This invites questions.
Does the system believe she prefers surprise? Does it imagine she enjoys discovering an empty bottle at 9:47 p.m. on a Sunday? Has it classified her medications as self-renewing, or perhaps metaphysical?
Or—and this is the most charitable theory—has the system decided that one set of prescriptions is operational, while the other is merely conceptual?
Because systems love categories. They thrive on them. They do not so much perceive people as columns, flags, and conditional statements. Somewhere in a database, one person is not “someone who might appreciate a reminder,” but a record attached to a plan attached to a rule attached to a footnote attached to a lawyer’s mild anxiety from years ago.
The other person, meanwhile, is attached to a slightly different plan, one containing a checkbox that reads something like:
☑ HUMAN, PROBABLY FORGETFUL
☑ CALL WHEN PILLS NEAR OBLIVION
And so the calls come.
What makes this funny is not that the system fails. It works beautifully. It executes its logic flawlessly. It does exactly what it was designed to do. It simply does not share our understanding of what that design was meant to serve.
To an ordinary person, a prescription running out is a practical problem requiring advance notice.
To the system, it is a contractual event whose notification behavior depends on how the prescription entered the building, what year it was first typed in, and whether a particular box was left unchecked during a lunch break three software versions ago.
The system is not malicious. It is not even indifferent. It is faithful. Faithful to rules no one remembers agreeing to, designed to mitigate risks no one currently recalls, enforced by people who are not empowered to change them.
And so the unnotified person does what modern adults so often do in these situations: she notices the absurdity, briefly considers asking someone to fix it, and then decides it would take less time to simply keep track herself.
Which, of course, is the system’s quiet triumph.
The pharmacy will continue to call one person, reliably and punctually, with genuine concern.
The other will continue to exist in the silent spaces between those calls—perfectly capable, entirely unnotified, and faintly amused that a machine sophisticated enough to manage medication still hasn’t figured out how to recognize the person standing right in front of it.
