Transparency
How guides are made
Handiy guides are generated using AI. We want to be clear about what that means in practice — what makes them useful, where they fall short, and how we’re working to improve them.
The process
Every Handiy guide is generated by Anthropic’s Claude language model using a structured prompt that constrains the output to a specific format: appliance type, symptom, probable causes, required tools, numbered steps, callout warnings, pro tips, and safety notices.
We don’t ask the model to write freely. The prompt specifies exactly what sections to produce and what each section must contain — so every guide follows the same structure regardless of appliance or symptom.
Structured prompting
The prompt includes the appliance type, symptom, and — when provided — the brand and model number. For model-specific guides, the AI is instructed to reference the exact panel location, fastener types, and known quirks for that model. A guide for a Samsung RF28 refrigerator should not read the same as one for an LG LRMVS.
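Taken together, the constrained format and the model-specific inputs can be sketched roughly like this. This is a hypothetical illustration, not Handiy's actual prompt; the section names come from the description above, but every function and variable name is an assumption:

```python
# Hypothetical sketch of a structured guide prompt. Section names are taken
# from the page above; all identifiers are illustrative, not Handiy's code.
REQUIRED_SECTIONS = [
    "probable causes",
    "required tools",
    "numbered steps",
    "callout warnings",
    "pro tips",
    "safety notices",
]

def build_guide_prompt(appliance: str, symptom: str,
                       brand: str = "", model_number: str = "") -> str:
    """Compose a prompt that constrains the model to a fixed guide format."""
    target = " ".join(part for part in (brand, model_number, appliance) if part)
    sections = "\n".join(f"- {s}" for s in REQUIRED_SECTIONS)
    prompt = (
        f"Write a repair guide for a {target} with this symptom: {symptom}.\n"
        f"Produce exactly these sections, in order, and nothing else:\n{sections}"
    )
    if model_number:
        prompt += ("\nReference the exact panel locations, fastener types, "
                   f"and known quirks of the {model_number}.")
    return prompt
```

Because the section list is fixed in the prompt rather than left to the model, every guide comes back in the same shape whether the input is a dishwasher or a dryer.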
Diagnostic context
When a guide is generated from the diagnostic tool, the full path of the user’s answers (e.g. “Not cooling → Compressor runs → Ice forming at back”) is passed as additional context. This narrows the guide to the specific failure mode the user observed, rather than a generic overview of the symptom.
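As a rough illustration (the function name and serialization format are assumptions, not Handiy's actual code), the answer path might be flattened into a single context string that is appended to the guide prompt:

```python
# Illustrative only: flatten a diagnostic answer path into prompt context.
def diagnostic_context(answers: list[str]) -> str:
    """Join the user's answers, in order, into one context line."""
    return "Diagnostic path: " + " → ".join(answers)

ctx = diagnostic_context(["Not cooling", "Compressor runs", "Ice forming at back"])
# ctx is appended to the prompt so the guide targets this exact failure mode.
```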
Grounded in technical knowledge
Claude is trained on a large corpus of technical documentation, service manuals, repair forums, and engineering literature. Handiy prompts are designed to draw on that domain knowledge specifically — not to produce generic prose, but step-specific repair guidance grounded in how appliances are actually built.
Saved and reused
Once generated, a guide is saved to our database and served to all future users who search for the same appliance, symptom, or model. This means the library grows with usage: a guide generated today for a Nespresso Vertuo that won't brew is available to the next person who searches for it tomorrow.
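The generate-once, serve-many pattern this describes might look like the following sketch. The in-memory dict stands in for the real database, and every name here is hypothetical:

```python
# Hypothetical sketch of saving a generated guide and reusing it later.
guides: dict[tuple[str, str, str], str] = {}  # stand-in for the real database

def guide_key(appliance: str, symptom: str, model: str = "") -> tuple[str, str, str]:
    """Normalize the lookup key so equivalent searches map to the same guide."""
    return (appliance.strip().lower(), symptom.strip().lower(), model.strip().lower())

def get_or_generate(appliance: str, symptom: str, generate, model: str = "") -> str:
    """Return the stored guide if one exists; otherwise generate and save it."""
    key = guide_key(appliance, symptom, model)
    if key not in guides:
        guides[key] = generate()  # only the first searcher triggers generation
    return guides[key]
```

Because the key is normalized, a later search for the same appliance and symptom is served from storage rather than regenerated.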
Limitations you should know
AI-generated repair guides have real limitations. We display a disclaimer on every guide page, and we want to be specific about what those limitations are:
- AI can be wrong. Claude can produce plausible-sounding instructions that are incorrect for a specific model, outdated, or based on a misunderstanding of the failure mode. Always cross-reference with your appliance’s manual.
- Model coverage varies. Guides for common brands (Whirlpool, Samsung, LG, GE) are likely to be more accurate than guides for obscure or regional brands with less representation in training data.
- Parts change. Manufacturer part numbers are superseded regularly. We do not verify that part references in guides are current. Always confirm part numbers against your appliance’s serial number before ordering.
- We do not manually review every guide. Guides are generated on demand. We rely on user feedback and a training signal from photo scan confirmations to surface issues, not manual editorial review.
How we improve over time
Every completed diagnostic session records anonymous data about the appliance type, symptom path, and guides shown. When users confirm or correct photo scan results, that signal feeds back into how we rank and surface guides.
We do not store photos. We do not store raw IP addresses. Anonymous session data contains no personally identifying information. See our Privacy Policy for the full breakdown of what is and isn’t collected.
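For concreteness, an anonymous session record of the kind described might have a shape like this. The field names are assumptions, not our actual schema; the point is what's absent: no photo, no IP address, no account identifier:

```python
# Illustrative shape of an anonymous session record; field names are assumed.
session = {
    "appliance_type": "refrigerator",
    "symptom_path": ["Not cooling", "Compressor runs", "Ice forming at back"],
    "guides_shown": ["guide-1234"],  # hypothetical guide ID
    "scan_confirmed": True,          # the feedback signal, not the photo itself
}

# No personally identifying fields are present.
assert not any(k in session for k in ("photo", "ip_address", "user_id", "email"))
```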
If you find an error in a guide, report it. We investigate every report and update guides where the AI got something wrong.
See it in action
Run the diagnostic tool or browse existing guides to see what the output looks like for your appliance.