A golden retriever comes in for lethargy. The owner mentions, in passing, that the dog has been drinking more water for about three weeks and had a brief limp last month. The exam runs fifteen minutes. Bloodwork is ordered. Two weeks later the dog returns, worse. The limp detail never made it into the chart.
That is how veterinary medicine works under real-world time pressure. Patients cannot speak. Owners remember fragments. The chart captures a fraction of what was actually said in the room.
The Problem
Veterinarians in general practice routinely see 20 to 30 patients a day, often with 15-minute appointment slots. Every case depends on a verbal history delivered by someone who is stressed, distracted, and not medically trained. The owner talks for several minutes. The vet types a few lines. Whatever did not make it into those lines is gone the moment the next patient walks in.
The details that get lost are usually the ones that matter. Diet changes. Subtle behavioral shifts. Prior symptoms that resolved on their own. Medications given by another household member. What the breeder said at adoption two years ago. How the previous vet explained the cardiac murmur. Any one of these can change a differential and steer a workup down the right path on the first visit instead of the third.
Across a pet's lifetime, a clinic may accumulate 40 or more visits. Rotating vets within a practice see the same patient with no memory of the last conversation. Specialists get a referral summary that compresses hours of context into three sentences. Emergency clinics see the patient cold. By the time a case becomes complex, the full story is already gone — not because anyone was careless, but because nobody had a way to keep it.
The cost is not just diagnostic. It is also relational. Owners who repeat the same history at every visit eventually feel like the clinic does not know their pet, even when the team genuinely cares. Trust slowly erodes, and so does retention.
Why Current Solutions Fall Short
Most clinics already have tools that touch part of this problem. None of them solve it.
- Typed SOAP notes compress aggressively. Vets write during or just after the appointment. Under time pressure they capture the diagnosis and plan but rarely the owner's exact wording — and the owner's exact wording often contains the diagnostic clue. "He's been a little off" reads cleanly in a chart but loses every distinguishing detail the owner actually said.
- Practice management systems store structured data, not narrative. PIMS platforms like Cornerstone, Avimark, and ezyVet handle billing, vaccines, weight history, and reminders well. They do not preserve the conversation that produced the diagnostic decision in the first place.
- Generic dictation tools miss veterinary vocabulary. Terms like pyometra, cruciate, IMHA, hemangiosarcoma, lepto titer, proparacaine, gabapentin dosing, and breed-specific syndromes get mistranscribed by general-purpose speech tools, forcing the vet to rework the note instead of trusting it.
- Rotating staff lose continuity. When the patient sees a different vet, tech, or receptionist on the next visit, the prior context lives only in someone else's memory. Multi-vet practices, relief vets, and weekend coverage all hit the same wall.
- Hand-written notes do not survive audits or referrals. They are also unsearchable, which makes "when did this dog first limp" a question nobody can answer in fifteen minutes.
What Actually Works
Veterinary teams do not need another data-entry tool. They need a recording and documentation layer that captures the full exam conversation, understands clinical vocabulary, and makes years of prior visits searchable in seconds. Four pieces matter.
Accurate transcription of clinical vocabulary
AmyNote uses OpenAI's speech-to-text API, which handles veterinary terminology well. Drug names, breed-specific conditions, and dosages come through cleanly. Not perfect — no transcription is — but accurate enough that the vet can trust the transcript without doing a full second-pass edit. The threshold matters. A transcript that needs rewriting is worse than no transcript at all.
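The "accurate enough to trust" threshold can be put into numbers. A common gauge (this is a general evaluation technique, not part of AmyNote) is word error rate: edit distance between the raw transcript and a hand-corrected reference, divided by the reference length. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """(substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming table: d[i][j] = edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Drug names are exactly where general-purpose models slip:
ref = "started gabapentin 100 mg twice daily for the cruciate strain"
hyp = "started gaba pentin 100 mg twice daily for the crucial strain"
print(f"WER: {word_error_rate(ref, hyp):.2f}")  # → WER: 0.30
```

Running a few real exam recordings through a check like this, against transcripts the vet has corrected by hand, turns "trustworthy enough" from a feeling into a measurement.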
Speaker identification with cross-visit memory
The system separates vet, tech, and owner within a single appointment, and remembers returning owners across visits. When Mrs. Chen comes back six months later with a new concern, the vet can pull up her last three conversations in seconds and walk into the room already knowing what was said about the cat's appetite, the husband's allergies, and the trip to the in-laws that disrupted the feeding schedule. That is the difference between feeling known and feeling processed.
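The data shape this implies is straightforward: speaker-labeled segments grouped into visits, grouped under an owner. A minimal sketch of that structure (field names and methods are illustrative, not AmyNote's actual schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Segment:
    speaker: str  # "vet", "tech", or "owner"
    text: str

@dataclass
class Visit:
    when: date
    segments: list[Segment] = field(default_factory=list)

@dataclass
class OwnerRecord:
    name: str
    visits: list[Visit] = field(default_factory=list)

    def recent_conversations(self, n: int = 3) -> list[Visit]:
        """Last n visits, newest first: what the vet skims before entering the room."""
        return sorted(self.visits, key=lambda v: v.when, reverse=True)[:n]

    def owner_said(self, keyword: str) -> list[tuple[date, str]]:
        """Every owner-spoken line mentioning a keyword, across all visits."""
        return [(v.when, s.text)
                for v in self.visits for s in v.segments
                if s.speaker == "owner" and keyword.lower() in s.text.lower()]

chen = OwnerRecord("Mrs. Chen", visits=[
    Visit(date(2024, 3, 2), [Segment("owner", "Her appetite dropped after the trip to the in-laws.")]),
    Visit(date(2024, 9, 14), [Segment("owner", "Appetite is back, but she hides more.")]),
])
print(chen.owner_said("appetite"))
```

The point of the structure is that `owner_said` filters by speaker: the owner's own wording stays queryable separately from the clinical discussion around it.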
AI summaries and semantic search
Anthropic's Claude Opus generates a structured SOAP draft from each appointment, plus a plain-language client communication summary and a referral-ready history that a specialist can actually read in under a minute. Search works across every recorded visit: query "when did this dog first limp" and get the exact timestamp from an appointment fourteen months ago, not a list of fifty-eight unrelated chart entries to scroll through.
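Semantic search of this kind is typically built by embedding each transcript chunk as a vector and ranking chunks by cosine similarity to the query's vector. A toy sketch of the ranking step, using hand-made three-dimensional vectors in place of real model embeddings (this illustrates the technique, not AmyNote's internals):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# (timestamp, transcript chunk, embedding). The vectors here are toy stand-ins;
# in practice an embedding model produces them from the chunk text.
chunks = [
    ("2023-04-11 00:07:32", "Owner mentions he limped for a day after the hike", [0.9, 0.1, 0.0]),
    ("2024-01-25 00:02:10", "Discussed switching kibble brands",                 [0.1, 0.8, 0.2]),
    ("2024-06-03 00:11:05", "Drinking noticeably more water for three weeks",    [0.0, 0.2, 0.9]),
]

def search(query_vec: list[float], top_k: int = 1) -> list[tuple[str, str]]:
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[2]), reverse=True)
    return [(ts, text) for ts, text, _ in ranked[:top_k]]

# A query like "when did this dog first limp" embeds near the limping chunk:
print(search([0.85, 0.15, 0.05]))
```

Because the match is by meaning rather than keyword, "when did this dog first limp" can surface a chunk that says "he limped for a day after the hike" even though the query and the transcript share almost no exact wording.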
Privacy architecture that holds up
Both OpenAI and Anthropic contractually guarantee zero training on user data. Audio is encrypted in transit and not retained on provider servers after processing. Transcripts are stored locally on the clinic's device with end-to-end encryption. No patient audio sitting on a third-party server. For practices that share hospitals or refer to specialty centers, the export path and audit trail also matter — both are first-class features rather than afterthoughts.
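An audit trail that "holds up" usually means tamper-evident: each entry commits cryptographically to the one before it, so an edited or deleted record breaks the chain. A standard-library sketch of that idea (illustrative only, with a hypothetical key; this is not AmyNote's implementation):

```python
import hashlib, hmac, json

SECRET = b"clinic-local-key"  # hypothetical; in practice kept in the clinic's key store

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event whose MAC covers both the event and the previous entry's MAC."""
    prev = log[-1]["mac"] if log else "genesis"
    payload = json.dumps(event, sort_keys=True)
    mac = hmac.new(SECRET, (prev + payload).encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "prev": prev, "mac": mac})

def verify(log: list[dict]) -> bool:
    """Recompute the chain from the start; any edit or deletion breaks it."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hmac.new(SECRET, (prev + payload).encode(), hashlib.sha256).hexdigest()
        if entry["prev"] != prev or entry["mac"] != expected:
            return False
        prev = entry["mac"]
    return True

log: list[dict] = []
append_entry(log, {"action": "export", "patient": "Biscuit", "to": "specialty-center"})
append_entry(log, {"action": "view", "patient": "Biscuit", "user": "relief-vet"})
print(verify(log))             # True for an untampered log
log[0]["event"]["to"] = "x"    # editing any past entry...
print(verify(log))             # ...makes verification fail: False
```

Chaining MACs this way is what turns an audit log from "a file someone could quietly edit" into evidence that survives a records dispute or referral question.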
Getting Started
Veterinary practices evaluating this kind of workflow should test it during a normal clinic day, not a sales demo. The right evaluation looks for three things.
- Transcription quality on drug names and breed-specific terms. Run a real exam with at least one prescription, one breed-specific condition, and one owner-reported behavioral change. Read the transcript end to end. If you find yourself fixing more than two or three lines, the tool is not ready for the room.
- Searchability across prior visits. After two weeks of recordings, try to answer a question that historically would have required scrolling through chart entries. "When did the polyuria start" or "did we ever discuss switching the joint supplement" are good tests. The answer should appear in seconds.
- Whether the AI summary matches the way your vets actually write SOAP notes. Every clinic has a house style. The summary should fit it within minor edits, not require rewriting from scratch.
The right documentation layer gives back twenty minutes per vet per day and keeps the family story attached to the patient, not trapped in somebody's head.
AmyNote offers a 3-day free trial with no credit card. The pet history gap is not a clinical failure — it is a tooling failure. Closing it is the difference between catching the limp on visit one and catching it on visit three.
Try AmyNote at amynote.app. The trial is enough to know within a single clinic day whether the workflow fits how your team actually practices.
Originally published as an X Article.


