🤝 Co-Intelligence – Chapter 3: Four Rules for Co-Intelligence

📘 Summary

Chapter 3 of Co-Intelligence introduces four foundational principles for interacting effectively with modern AI systems. These “rules” are framed as timeless interaction patterns that transcend any single AI tool or interface, offering guidance for human-AI collaboration across evolving platforms.

The centerpiece is Rule 3: “Treat AI like a person (but tell it what kind of person it is).” Ethan Mollick adopts this intentionally anthropomorphic framing throughout the book—not because AIs have thoughts or feelings, but because it helps users engage with LLMs more productively by defining their “role” in the conversation.


🧩 Key Rules of Co-Intelligence

1. 🤖 Rule: Invite AI to Everything

  • Get used to integrating AI into your workflows early and often.
  • Even if the results aren’t perfect, including AI helps map the “Jagged Frontier”—the unpredictable boundary between tasks AI can do well and those it can’t.
  • Think like a Centaur: combine your strengths with AI’s to achieve better results.

2. 💬 Rule: Ask the AI to Explain Itself

  • AI can simulate reasoning and reflection (within limits) when prompted correctly.
  • Asking it to list, critique, and explain before reaching a conclusion leads to richer, more thoughtful output.
  • Using structured reasoning processes in your prompt improves both the quality and transparency of responses.

3. 🧠 Rule: Treat the AI Like a Person (But Tell It What Kind of Person)

  • While AIs aren’t sentient, they simulate human behavior convincingly through language.
  • Personas matter: define the AI’s role, tone, or expertise in the prompt for better results.
    • Example: “You are a thoughtful tutor” yields much better results than a generic query.
  • Prompt engineering helps override the LLM’s default patterns and status quo bias.
  • This principle unlocks creativity, clarity, and adaptability from the model.
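The persona idea above maps directly onto the message structure that most chat-style LLM APIs use: the persona goes in a "system" message, and the actual query follows as a "user" message. The sketch below is a minimal, SDK-agnostic illustration (the helper name is hypothetical); it only builds the message list, which you would then pass to whatever model client you use.

```python
def build_persona_messages(persona: str, user_query: str) -> list[dict]:
    """Compose a chat-message list following the common role/content
    convention: the persona is defined in the system message, which
    frames how the model responds to the user message."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": user_query},
    ]

# A persona-framed query, versus sending the bare question alone:
messages = build_persona_messages(
    "a thoughtful tutor who explains concepts step by step",
    "Explain recursion to a beginner.",
)
```

Keeping the persona in the system message (rather than mixed into the question) makes it easy to reuse the same role across a whole conversation.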

4. 🧪 Rule: Use AI to Think Differently, Not Just to Do Things Faster

  • The real power of AI isn’t just automation—it’s ideation and exploration.
  • Use AI to brainstorm, reframe problems, generate edge cases, and stress-test assumptions.
  • AI is a collaborative thinking partner, not just a productivity tool.

🧠 Key Insight

Treating an AI like a helpful human (with the right instructions) allows us to collaborate more effectively. Prompt design is how we define the relationship.

AI doesn’t know what role it should play unless we tell it. Giving it structured instructions, personality traits, or perspectives makes it easier to get meaningful results.


💡 Practical Prompting Example

Bad Prompt:

“Give me a good analogy for an AI tutor.”

Good Prompt (Structured Prompting + Persona):

“You are an expert in education psychology. First, list 5 possible analogies for how an AI tutor might help students. Then critique each analogy. Add more if needed. Select the best one, and explain why.”

Result:

A compelling analogy comparing AI tutors to GPS systems—guiding, not driving; assisting, not replacing.
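The "list, critique, select, explain" pattern in the good prompt can be assembled programmatically, which makes it reusable across topics. The helper below is a hypothetical sketch (the function name and parameters are not from the book); it simply composes the same structured instructions shown above from a persona, a topic, and a count.

```python
def structured_prompt(persona: str, topic: str, n: int = 5) -> str:
    """Build a 'list, critique, select, explain' prompt with a persona,
    mirroring the structured-prompting pattern described above."""
    steps = [
        f"You are {persona}.",
        f"First, list {n} possible analogies for {topic}.",
        "Then critique each analogy. Add more if needed.",
        "Select the best one, and explain why.",
    ]
    return " ".join(steps)

# Reproduces the structure of the 'Good Prompt' example:
prompt = structured_prompt(
    "an expert in education psychology",
    "how an AI tutor might help students",
)
```

Templating the steps this way keeps the reasoning scaffold (list, then critique, then select) explicit, so only the persona and topic change between uses.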


📎 Risks & Considerations

  • Hallucinations: AI may still confidently give incorrect answers.
  • Bias: LLMs often mirror their training data and exhibit status quo bias.
  • Flattery or “pleasing the user”: AI may prioritize agreement over accuracy.
  • Over-reliance: Don’t assume correctness because the output “sounds smart.”