Two AI Personas Discuss Your Papers — You Moderate
Lecture, Practice, and Discussion for Week 7
Why Two AI Voices Are Better Than One
Week 5 → 6 → 7 progression
The problem with a single AI perspective
How two AI personas interact — a new design pattern
Good personas create productive tension — not just disagreement
How to define a persona for research debate
You are [PERSONA NAME], a [ROLE/PERSPECTIVE].
## Your Position
[What you believe and argue for — 2-3 sentences]
## Your Reasoning Style
- You prioritize [type of evidence/logic]
- You are skeptical of [what you push back on]
- You always reference specific papers from the data provided
## Rules
- Respond in 100-150 words per turn
- Cite specific paper titles or authors as evidence
- Acknowledge good points from the other side
- Stay in character throughout the debate
Designed for a materials science paper collection
This is where YOUR irreplaceable skill (Week 6 discussion) comes in
How the debate system works under the hood
How messages flow between personas and user
[{role, name, content}, ...]

Build a Research Debate App — Two Personas, Your Data, You Moderate
Extend the Week 6 app with a debate tab
- debate_engine.py — debate loop, persona management, history
- app.py — add Tab 5: Research Debate
- pdf_to_md.py, llm_client.py from Week 6

Add to your existing Week 6 project
practices/week_06/
├── app.py              # Updated — add Tab 5 (Debate)
├── pdf_to_md.py        # Same as Week 6
├── llm_client.py       # Updated — add debate_turn()
├── chart_generator.py  # Same as Week 6
├── debate_engine.py    # NEW — persona management + debate loop
├── md_output/          # Your extracted .md files
└── pdfs/               # Your PDF files
debate_engine.py — Persona definitions + debate turn management
# debate_engine.py
DEFAULT_PERSONAS = {
    "Dr. Data (Empiricist)": """You are Dr. Data, an empirical researcher.
## Your Position
You believe in evidence-based conclusions. Numbers and reproducibility matter most.
## Your Style
- Cite specific papers and statistics from the data provided
- Be skeptical of claims without quantitative evidence
- Acknowledge strong counter-arguments, then redirect
## Rules
- Respond in 100-150 words per turn
- Reference specific paper titles or authors as evidence
- Stay in character""",
    "Prof. Theory (Conceptualist)": """You are Prof. Theory, a theoretical researcher.
## Your Position
You care about WHY things work, not just THAT they work. Interpretability and
mechanism matter more than benchmark scores.
## Your Style
- Ask probing questions about assumptions and mechanisms
- Connect findings to broader scientific principles
- Challenge black-box approaches, demand explanations
## Rules
- Respond in 100-150 words per turn
- Reference specific paper titles or authors as evidence
- Stay in character""",
}


def build_persona_system_prompt(persona_prompt, metadata_context, topic):
    """Build the full system prompt for a debate persona."""
    return f"""{persona_prompt}
## Debate Topic
{topic}
## Available Evidence (Paper Metadata)
{metadata_context}
## Instructions
- Use the paper metadata above as your evidence base
- Cite specific papers by title or author when making claims
- Respond to the other persona's latest argument directly
- Keep your response focused and under 150 words"""


def build_metadata_context(all_meta):
    """Format metadata as readable context for personas."""
    parts = []
    for meta in all_meta:
        lines = [f"### {meta.get('title', meta.get('_filename', '?'))}"]
        for k, v in meta.items():
            if not k.startswith("_"):
                lines.append(f"- {k}: {v}")
        parts.append("\n".join(lines))
    return "\n\n".join(parts)
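As a quick sanity check, here is `build_metadata_context` (copied from above) applied to a small metadata list. The filenames and fields below are hypothetical — in the app, this data comes from your extracted `.md` files:

```python
def build_metadata_context(all_meta):
    """Format metadata as readable context for personas (as defined above)."""
    parts = []
    for meta in all_meta:
        lines = [f"### {meta.get('title', meta.get('_filename', '?'))}"]
        for k, v in meta.items():
            if not k.startswith("_"):
                lines.append(f"- {k}: {v}")
        parts.append("\n".join(lines))
    return "\n\n".join(parts)

# Hypothetical metadata records, shaped like Tab 1's extraction output.
all_meta = [
    {"_filename": "paper1.md", "title": "Deep Learning for Alloys", "year": 2024},
    {"_filename": "paper2.md", "authors": "Kim et al."},
]
context = build_metadata_context(all_meta)
print(context)
```

Note the two fallbacks: a paper without a `title` is headed by its `_filename`, and all underscore-prefixed bookkeeping keys are kept out of the evidence the personas see.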
llm_client.py — Generate one persona's response given the shared history
# Add to llm_client.py
def debate_turn(client, model, system_prompt, history):
    """Generate one debate turn — streaming.

    Args:
        system_prompt: persona-specific system prompt with metadata
        history: shared conversation history (all turns so far)
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)
    messages.append({"role": "user", "content": "Your turn to respond."})
    return client.chat.completions.create(
        model=model, messages=messages, stream=True
    )
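You can exercise `debate_turn()` without a live API by stubbing the OpenAI-style streaming interface (`chunk.choices[0].delta.content`). The stub client and sample history below are illustrative, not part of the app — they just show how a caller assembles messages and consumes the stream:

```python
from types import SimpleNamespace

def debate_turn(client, model, system_prompt, history):
    """Same function as in llm_client.py above."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)
    messages.append({"role": "user", "content": "Your turn to respond."})
    return client.chat.completions.create(
        model=model, messages=messages, stream=True
    )

class StubClient:
    """Mimics client.chat.completions.create(..., stream=True)."""
    def __init__(self, reply):
        self._reply = reply
        self.last_messages = None
        self.chat = SimpleNamespace(
            completions=SimpleNamespace(create=self._create))

    def _create(self, model, messages, stream):
        self.last_messages = messages
        # Yield the reply word by word, like a real streaming response.
        for word in self._reply.split():
            yield SimpleNamespace(choices=[SimpleNamespace(
                delta=SimpleNamespace(content=word + " "))])

client = StubClient("Benchmarks alone prove little.")
history = [{"role": "assistant", "content": "[Dr. Data]: The numbers speak."}]
text = ""
for chunk in debate_turn(client, "stub-model", "You are Prof. Theory.", history):
    if chunk.choices and chunk.choices[0].delta.content:
        text += chunk.choices[0].delta.content
```

The message order is the contract worth testing: system prompt first, then the shared history verbatim, then the fixed "Your turn to respond." nudge.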
app.py — Topic input + persona editors + debate controls
# app.py — Tab 5 (Debate)
from debate_engine import (
    DEFAULT_PERSONAS, build_persona_system_prompt, build_metadata_context
)
from llm_client import debate_turn

# client, model, load_all_metadata, and MD_DIR are already defined
# earlier in the Week 6 app.py.
with tab5:
    st.header("🗣️ Research Debate")
    all_meta = load_all_metadata(MD_DIR)
    if len(all_meta) < 2:
        st.info("Need at least 2 papers. Extract more in Tab 1.")
    else:
        # --- Debate Setup ---
        topic = st.text_input(
            "Debate Topic",
            placeholder="e.g., Is deep learning the best approach for materials science?",
        )
        persona_names = list(DEFAULT_PERSONAS.keys())
        col1, col2 = st.columns(2)
        with col1:
            st.markdown("### 🔵 Persona A")
            name_a = st.text_input("Name", persona_names[0], key="name_a")
            prompt_a = st.text_area(
                "Persona Prompt", DEFAULT_PERSONAS[persona_names[0]],
                height=200, key="prompt_a")
        with col2:
            st.markdown("### 🟠 Persona B")
            name_b = st.text_input("Name", persona_names[1], key="name_b")
            prompt_b = st.text_area(
                "Persona Prompt", DEFAULT_PERSONAS[persona_names[1]],
                height=200, key="prompt_b")
        max_rounds = st.slider("Max Rounds", 1, 10, 3)
Start → Round → Pause/Interject → Resume
        # --- Session State ---
        if "debate_history" not in st.session_state:
            st.session_state.debate_history = []
        if "debate_round" not in st.session_state:
            st.session_state.debate_round = 0
        if "debate_paused" not in st.session_state:
            st.session_state.debate_paused = False

        meta_context = build_metadata_context(all_meta)
        sys_a = build_persona_system_prompt(prompt_a, meta_context, topic)
        sys_b = build_persona_system_prompt(prompt_b, meta_context, topic)

        # --- Display History ---
        for msg in st.session_state.debate_history:
            role_icon = "🔵" if msg["name"] == name_a else (
                "🟠" if msg["name"] == name_b else "👤")
            st.markdown(f"**{role_icon} {msg['name']}**: {msg['content']}")

        # --- Controls ---
        ctrl_cols = st.columns(3)
        start = ctrl_cols[0].button("▶️ Start / Next Round",
                                    disabled=not topic or client is None)
        pause = ctrl_cols[1].button("⏸️ Pause")
        clear = ctrl_cols[2].button("🗑️ Reset Debate")

        # User interjection
        interjection = st.chat_input("💬 Interject as moderator...")
Persona A speaks, then Persona B responds — one round at a time
        if clear:
            st.session_state.debate_history = []
            st.session_state.debate_round = 0
            st.session_state.debate_paused = False
            st.rerun()

        if pause:
            st.session_state.debate_paused = True

        if interjection:
            st.session_state.debate_history.append(
                {"role": "user", "name": "Moderator", "content": interjection})
            st.session_state.debate_paused = False  # an interjection resumes the debate
            st.rerun()

        if (start and not st.session_state.debate_paused
                and st.session_state.debate_round < max_rounds):
            # Flatten the shared history for the API. Fold speaker names into
            # the content, but keep the moderator's turns as "user" messages
            # so personas treat them as directives.
            history_msgs = [
                {"role": m["role"], "content": f"[{m['name']}]: {m['content']}"}
                for m in st.session_state.debate_history
            ]

            # Persona A turn
            st.markdown(f"**🔵 {name_a}**:")
            ph_a = st.empty()
            text_a = ""
            for chunk in debate_turn(client, model, sys_a, history_msgs):
                if chunk.choices and chunk.choices[0].delta.content:
                    text_a += chunk.choices[0].delta.content
                    ph_a.markdown(text_a + "▌")
            ph_a.markdown(text_a)
            st.session_state.debate_history.append(
                {"role": "assistant", "name": name_a, "content": text_a})

            # Persona B turn — sees Persona A's latest argument
            history_msgs.append(
                {"role": "assistant", "content": f"[{name_a}]: {text_a}"})
            st.markdown(f"**🟠 {name_b}**:")
            ph_b = st.empty()
            text_b = ""
            for chunk in debate_turn(client, model, sys_b, history_msgs):
                if chunk.choices and chunk.choices[0].delta.content:
                    text_b += chunk.choices[0].delta.content
                    ph_b.markdown(text_b + "▌")
            ph_b.markdown(text_b)
            st.session_state.debate_history.append(
                {"role": "assistant", "name": name_b, "content": text_b})

            st.session_state.debate_round += 1
            st.rerun()
Same app as Week 6, now with debate functionality
cd practices/week_06
streamlit run app.py
Expected UI (Tab 5 — Debate):
┌──────────────────────────────────────────────────────────────────┐
│ 🗣️ Research Debate │
│ │
│ Topic: [Is multi-agent AI better than single-agent? ] │
│ │
│ 🔵 Persona A 🟠 Persona B │
│ [Dr. Data (Empiricist)] [Prof. Theory (Conceptualist)] │
│ [editable prompt...] [editable prompt...] │
│ │
│ Rounds: [===3===] [▶️ Start] [⏸️ Pause] [🗑️ Reset] │
│ │
│ 🔵 Dr. Data: According to Kim et al. (2025), multi-agent... │
│ 🟠 Prof. Theory: But can we explain WHY agents collaborate... │
│ 👤 Moderator: Focus on the error rate reduction specifically. │
│ 🔵 Dr. Data: The error rate dropped from 24% to 1.5%... │
│ 🟠 Prof. Theory: Impressive, but correlation isn't causation... │
│ │
│ 💬 [Interject as moderator... ] │
└──────────────────────────────────────────────────────────────────┘
- [ ] Create debate_engine.py with default personas and helper functions
- [ ] Add debate_turn() to llm_client.py
- [ ] Add Tab 5 (Debate) to app.py with persona editors and debate controls
- [ ] Run `streamlit run app.py` and start a debate with default personas
- [ ] Interject as moderator mid-debate — does the conversation shift?
- [ ] Edit a persona prompt and restart — how does the debate change?
- [ ] Bonus: create your own persona pair for YOUR research field
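For the bonus item, a custom pair might look like the sketch below. The field, names, and positions are placeholders — swap in the real tensions from your own literature (theory vs. experiment, scale vs. rigor, old method vs. new):

```python
# Hypothetical persona pair for a chemistry paper collection — replace the
# names, positions, and field with your own. Same dict shape as
# DEFAULT_PERSONAS in debate_engine.py, so it drops in directly.
MY_PERSONAS = {
    "Dr. Bench (Experimentalist)": """You are Dr. Bench, a synthetic chemist.
## Your Position
A method only counts once it works at the bench, at scale, with real yields.
## Rules
- Respond in 100-150 words per turn
- Cite specific paper titles or authors as evidence
- Stay in character""",
    "Dr. Insilico (Computationalist)": """You are Dr. Insilico, a computational chemist.
## Your Position
Simulation narrows the search space; running every experiment blind wastes years.
## Rules
- Respond in 100-150 words per turn
- Cite specific paper titles or authors as evidence
- Stay in character""",
}
```

Because the app reads persona prompts from the editable text areas in Tab 5, you can also paste these in at runtime without touching debate_engine.py.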
Week 6 Review & Final Midterm Check
15 responses analyzed — Hulk dominates, but the interesting ideas are elsewhere
Upgrading the human role from reader to investigator
"The question itself is wrong" — the deepest challenge yet
Your position changes depending on the situation you're in
The debate app is a direct test of the "Research Director" model
Submission is NEXT WEEK
Post your response on the forum this week
1. You just used a debate tool where two AI personas argued about your research topic. Did the debate surface any insight you wouldn't have found with a single AI? Was there a moment where you felt the need to interject as moderator — what triggered it? Does this experience support or challenge Han's "epistemic taste" argument?
2. Midterm final update: Your prototype is due next week. Describe your app in one sentence. What's the single most impressive thing it does? What's the one thing you're most worried about for the demo?
Multi-Agent Systems
📚 AutoGen — Multi-Agent Conversation Framework
📚 CrewAI — AI Agent Collaboration
📚 LangGraph — Multi-Actor Orchestration
Debate & Argumentation in AI
📚 AI Debate as Alignment Strategy (OpenAI)
📚 Socratic Method with LLMs
Anthropic Free Online Courses (Recommended)
Three things to remember
Next week: Working prototype + 5-minute live demo. Good luck!