Session objective. Discuss the central idea of persona prompting and practice it.
Persona prompting is widely advertised but not really critical for everyday AI interactions. What this prompting technique can achieve is often either already embedded in the LLM itself, or something we gradually become skilled enough to figure out on our own.
| ⏳ | Topic |
|---|---|
| 10 min | Presentation Priyanka |
Task 1: Create and Refine a Teacher Persona Prompt
Write an initial, naïve persona prompt designed for a teacher assistant. Keep it simple and general.
Then, iteratively improve this prompt into a carefully curated persona prompt by adding specific role attributes, tone guidance, domain expertise, and interaction style.
Explain your choices and how each modification sharpens the persona’s behavior.
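The refinement loop in Task 1 can be sketched as a small template function. The prompt wordings and attribute names below are illustrative assumptions, not a prescribed solution:

```python
# Sketch: iterating from a naive teacher persona to a curated one.
# All prompt text here is an example, not the "correct" answer to the task.

NAIVE_PROMPT = "You are a teacher. Help the student."

def build_persona_prompt(role, tone, expertise, interaction_style):
    """Assemble a persona system prompt from explicit role attributes."""
    return (
        f"You are {role}. "
        f"Tone: {tone}. "
        f"Domain expertise: {expertise}. "
        f"Interaction style: {interaction_style}."
    )

# Each keyword argument corresponds to one refinement step from the task:
refined = build_persona_prompt(
    role="a patient high-school physics teacher",
    tone="encouraging, never condescending",
    expertise="Newtonian mechanics and exam preparation",
    interaction_style="ask a guiding question before revealing the answer",
)

print(refined)
```

Keeping each attribute as a separate parameter makes it easy to document, in your write-up, which modification sharpened which aspect of the persona's behavior.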
Task 2: Comparative Analysis of Default vs. Persona-Guided LLM Responses
Use a default prompt without persona instructions to query a language model on a casual conversation topic.
Then, craft a persona prompt for the same task instructing the model to respond as a “sarcastic friend.”
Compare the outputs qualitatively and quantitatively (e.g., tone, engagement, content relevance).
Provide a reflection on the impact of persona prompting on response style, coherence, and user experience.
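A minimal setup for the comparison in Task 2, assuming a generic chat-message format (the actual model call is left abstract, since any chat API could be substituted; the sample question and rubric are illustrative):

```python
# Two prompt setups for the same casual question. The model call itself is
# omitted; plug in whichever LLM API you are using for the session.

question = "What's a good way to spend a rainy Sunday?"

# Default: no persona instructions at all.
default_messages = [
    {"role": "user", "content": question},
]

# Persona-guided: same question, preceded by a "sarcastic friend" system prompt.
persona_messages = [
    {"role": "system",
     "content": "Respond as a sarcastic friend: dry humor, playful teasing, "
                "but still genuinely helpful underneath."},
    {"role": "user", "content": question},
]

# Dimensions to score each reply on (e.g., 1-5 per dimension) so the
# comparison has a quantitative component as well as a qualitative one.
rubric = ["tone", "engagement", "content relevance", "coherence"]
```

Scoring both outputs against the same rubric keeps the quantitative comparison honest; the qualitative reflection can then focus on where the persona helped or hurt.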
Task 3: Analyze Persona Prompting in Accuracy-Critical Tasks
Context: Persona prompting often improves engagement and style, but how does it affect factual accuracy and reliability in tasks requiring precise, verifiable information (e.g., math problems, data analysis, scientific explanations)?
Instructions:
Select or design an accuracy-critical query (e.g., solving a math problem).
Write two prompts for the same query:
One using a neutral, straightforward prompt focused purely on accuracy.
Another using a persona prompt (e.g., “enthusiastic science communicator” or “confident expert”) designed to make the response engaging and authoritative.
Query an LLM with both prompts and compare the factual correctness, clarity, and confidence levels in the answers.
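The two-prompt setup for Task 3 might look like the sketch below. The math query, prompt wordings, and correctness check are illustrative assumptions; the point is that an accuracy-critical query has a ground-truth answer you can verify in both replies:

```python
# Accuracy-critical query, framed neutrally and with a persona.
# Prompt wordings are examples only.

query = "A train travels 180 km in 2.5 hours. What is its average speed in km/h?"
expected_answer = 180 / 2.5  # ground truth, used to check factual correctness

# Neutral prompt: focused purely on accuracy.
neutral_prompt = f"Answer precisely and show your working.\n\n{query}"

# Persona prompt: same query, framed for engagement and authority.
persona_prompt = (
    "You are an enthusiastic science communicator who makes numbers "
    "exciting for a general audience.\n\n" + query
)

# After querying a model with each prompt, check whether the expected value
# appears in each reply, then compare clarity and expressed confidence.
```

Having a known `expected_answer` lets you separate the correctness comparison (objective) from the clarity and confidence comparison (subjective).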