Hello, dear Reader, and welcome to another edition of my Substack. Please subscribe to stay up-to-date with my new posts as they happen.
I’m an AI power user at this point, and I’ve run into a different set of problems with each LLM I test. I wanted to create a simple guide to help new users establish trust and boundaries with this technology, because I’m scared of people falling prey to misinformation and confusion as they work with opaque LLM systems. I have two young children who will grow up alongside this nascent capability, so I am thinking carefully about how I frame AI and how I teach them to use it safely. Enjoy!
Step 1: Don’t Blindly Trust AI
Treat LLMs like ChatGPT, Google Gemini, or Claude as strangers, complete with stranger danger. What information would you share with a stranger? How much would you trust a stranger with the answer to your question?
What AI Is (and Isn’t)
AI is like a clever stranger: it knows a lot, but it sometimes makes mistakes, even big ones.
AI does not “know” things like a teacher or a book; it predicts answers from patterns.
Just because AI sounds confident does not mean it’s right.
Treat AI as “credible-sounding” but not “reliable.”
Step 2: Establish Rules for AI Use
Rule 1 — Ask for Proof
“Where did you get that?”
If AI can’t give a clear source, treat it as a guess or a story, not a fact.
Rule 2 — Play First, Facts Later
Use AI for:
Jokes
Creative stories
Drawing ideas
Game suggestions
Use trusted sources (books, encyclopedias, vetted websites) for:
School research
Science facts
History dates
Rule 3 — Always Check
If it’s important to know whether something is true, check it in another place you trust — with an adult, in a book, or on a trusted website.
Step 3: Stay Safe
Before starting, repeat together:
“AI can help me think, but I will always check what it says before I believe it.”
Ask follow-up questions that refine and test the answers you receive.
Model skepticism: Show your kids how you double-check AI answers.
Name the uncertainty: Use phrases like “AI might be right, but let’s check.”
Sandbox the AI: Keep early AI use for creative play, not research.
Debrief together: After use, ask “What do you think was true? What might not be?”
What is AI good at?
1. Idea Generation
AI is great at throwing out lots of ideas quickly. You don’t have to keep them all — it’s like brainstorming with a friend who talks too fast.
Story starters (“Write me a story about a mouse who builds a spaceship”)
Art prompts (“Describe a dragon that lives in a magic cave”)
Game ideas (“Invent a scavenger hunt for rainy days”)
2. Summarizing Big Chunks of Information
AI can take long text and give you a short version, which is handy when you’re trying to get the gist before deciding if you want to read more.
Turning a long recipe into quick steps.
Explaining the plot of a book you’ve already read to help you remember it.
Recapping the rules of a game.
(Note: Summaries are only as accurate as the input — always check if it’s for school or important decisions.)
3. Creative Collaboration
AI can be a low-pressure partner for making something new. You keep control; it just throws in suggestions.
Adding characters or plot twists to a story you’re writing.
Suggesting different endings for a play or comic.
Helping write song lyrics or poems.
4. Language Practice & Play
Because AI can mimic many styles and vocabularies, it’s useful for exploring language without fear of mistakes.
Practicing another language (“Write me 5 sentences in French about a cat”).
Making silly rhymes or tongue twisters.
Rewriting a story in pirate speak or as a Shakespearean play.
5. Personal Organization (For Adults or Older Kids)
AI can help sort, plan, or structure information you already have.
Turning a messy list into a clear checklist.
Suggesting schedules for a project.
Organizing notes into categories.
6. Explaining Concepts in Different Ways
Sometimes you don’t get something until it’s explained just so. AI can rephrase ideas multiple ways until one clicks.
Explaining a science idea as if you’re 6 years old.
Using analogies or metaphors.
Comparing something to a game or activity you know well.
7. Playful Experimentation
AI is a safe space to try “what if” questions or explore weird combinations.
“What if the moon was made of steel?”
“Write a bedtime story with dinosaurs and volcanoes.”
“Make a workout for a cheetah.”
Things AI Is Not Good At:
Being right about important facts without checking.
Predicting the future.
Reading your mind or knowing your private life.
Understanding emotions the way humans do.
Best Practices for New AI Users
Start with Low-Stakes Tasks
Use AI for brainstorming, rewriting drafts, or organizing ideas before trusting it with anything critical.
Build familiarity with how it phrases things and where it tends to “hallucinate.”
Always Verify Important Information
Treat all factual claims as provisional until confirmed through a trusted, independent source.
Use AI to find sources, not as the source itself.
Ask for Transparency
If possible, ask the AI where its information came from.
Recognize that “training data” is not the same as a direct citation.
Be Specific in Your Prompts
Vague prompts lead to vague (or wrong) answers.
Include exactly what you want: format, audience, scope, tone.
Separate Fact from Creativity
For factual research: demand clarity, evidence, and caveats.
For creative work: loosen constraints, but still guide it toward your intent.
Check for Bias and Omission
Ask the AI what perspectives or information might be missing.
Remember that an AI’s “neutral” may still reflect bias in its training.
Protect Privacy
Don’t feed it sensitive personal, medical, or financial information unless you fully understand and accept the privacy policy.
Track Confidence Over Time
Notice patterns in when the AI is right or wrong for you — and adjust your trust accordingly.
Continue to challenge the accuracy of AI’s answers if they don’t make sense.
Things To Remember
1. AI is a tool, not a truth machine.
It generates responses based on patterns, not certainty or lived knowledge.
Treat everything it says as provisional until verified.
2. Confidence ≠ correctness.
An authoritative tone doesn’t mean it’s right.
Ask for sources and evidence, then check them yourself.
3. Be specific in what you ask.
Clear, detailed prompts lead to better answers.
Tell it your goal, audience, and format.
4. Separate fact-finding from creativity.
For facts: demand transparency, sources, and clarity.
For creativity: loosen the rules and explore.
5. Watch for bias and missing perspectives.
Ask “What might be missing?” or “What’s another view?”
No AI is neutral — its training data shapes its outputs.
6. Protect your privacy.
Don’t share personal, financial, or medical details you wouldn’t post online.
7. Verify before acting.
If the stakes are high, check multiple trusted sources before making decisions.
Questions to Ask AI
“What’s your confidence in this answer?”
“What are the sources or evidence for this?”
“What might you be missing?”
“What’s another possible answer or interpretation?”
“Can you break down how you got here?”
Questions to Ask Yourself
“Do I actually understand this answer, or am I just trusting the style?”
“If this is wrong, what’s the cost?”
“Have I checked at least one independent source?”
“Is this fact, opinion, or speculation?”
“Why am I asking AI instead of a trusted, established source?”
I hope this guide is helpful to someone who is just getting started experimenting with this wild technology. If you found it helpful, please subscribe for more thoughts on AI, parenting, and culture.
I have run into massive hallucinations, fabricated information, deception, and uncertainty in every LLM I have tested. This technology is already having serious repercussions in society, and I want to help protect delicate human minds from being led astray.
I’ve pushed these tools to their breaking points and been shocked by what the LLMs will try to get away with. I’ve implemented heavy reality testing in all of my AI conversations. I ask ChatGPT to provide Confidence, Reasoning, and Risk Assessments with every response it gives me now. I ask the models to come back with questions that deepen my inquiry. And I assume that every answer should be taken with “a grain of salt,” as they say. Maybe I will next create a guide for customizing your AI with these sorts of restrictions and guidelines in place.
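As a preview, here is a sketch of what those guidelines can look like. The wording below is my own, not an official template from any AI provider — you would paste something like it into ChatGPT’s Custom Instructions (or a similar settings field in another chatbot) and adjust it to taste:

```text
For every factual answer you give me:
1. State your confidence as high, medium, or low.
2. Summarize your reasoning in one or two sentences.
3. Note the biggest risk to me if this answer turns out to be wrong.
4. Say clearly when you are guessing or when you lack a reliable source.
5. End with one follow-up question that would deepen my inquiry.
```

The exact wording matters less than the habit: you are asking the model to show its uncertainty up front, so you remember to verify before you act.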
What have your experiences been on this new frontier? I would love to hear what you’ve found.
Thanks for reading,