Other Publications

Education Columns

Putting the Cart Before the Horse: Learner Satisfaction and Faculty Promotion

Karly Pippitt, MD, University of Utah; David Norris, MD, University of Mississippi Medical Center

An article published in 2013 titled “Why Rating Your Doctor Is Bad for Your Health” discussed evidence that patient satisfaction scores do not correlate positively with quality of care. Evidence suggests that highly rated physicians may have worse patient outcomes at a greater cost. Too many patients approach a visit as a way to get what they want, such as antibiotics or pain medications, rather than as a consultation with an expert to improve their health. We agree with the authors that more information about patient outcomes is needed to determine whether patient satisfaction should be a significant determinant of physician quality and, by extension, physician pay. And this got us thinking:

Have we created the same situation in medical education?

Learner satisfaction, like patient satisfaction, may have as much, or more, to do with fulfilling personal desires as with professional training needs. Obtaining feedback from students about the quality of teaching is an important part of being a medical educator; however, students need to understand that these comments are often used for more than feedback to the lecturer: they also factor into the promotion and tenure (P&T) process.

Feedback from students can often take a personal tone that gives an educator nothing constructive to act upon. For example, one of us received a comment that the faculty member “made no effort to teach us. I barely managed the course only because of all the extra time I personally put in.” Of note, this was part of a self-directed learning experience. Some comments are painful to read, but put in perspective they are unprofessional in nature and show a lack of insight on the part of the student. Personal pain aside, this type of comment can pose problems when reviewed by department chairs and P&T committees. Committee members may have little to no personal knowledge of candidates for promotion, so student evaluations can form a large part of their impression of a candidate’s teaching and, by extension, the candidate’s readiness for promotion. Comments like the one above do not represent a thoughtful evaluation of teaching quality.

Students should be held accountable for the content of their evaluations, while also having their anonymity protected from those grading them. Because students often feel their faculty are harping on them about professionalism, it is imperative for upper-class students and faculty to model this behavior and for deans to let students know that inappropriate feedback will not be sent to faculty. If deans are going to discard unhelpful feedback, faculty and administration need to educate students about what constitutes helpful criticism. Demonstrating to learners the impact of their feedback may empower students to provide more thoughtful comments that help improve teaching quality, rather than focusing on personal needs, such as pressuring faculty for a better grade. Administration should also seriously consider removing unprofessional comments that lack merit from a faculty member’s P&T file. It might also be helpful to include peer reviews of a colleague’s teaching in P&T files, providing a more complete view of the medical educator’s skill set. Further, we propose an independent review of unprofessional comments by someone from the Dean’s office or an Office of Professionalism. A faculty member would face serious professional repercussions if she or he used profanity in a student evaluation or made personally derogatory comments. Why should our students be held to a lower standard?

Evaluations and feedback are an important part of continually improving ourselves as physician educators. Let’s work on improving the quality of the information gained so that we no longer dread reading evaluations and can instead use them for their intended purpose.

AI Chatbot Tips

Tips for Using STFM's AI Assistant

STFM's AI Assistant is designed to help you find information and answers about Family Medicine education. While it's a powerful tool, getting the best results depends on how you phrase your questions. Here's how to make the most of your interactions:

1. Avoid Ambiguous Language

Be Clear and Specific: Use precise terms and avoid vague words like "it" or "that" without clear references.

Example:
Instead of: "Can you help me with that?"
Try: "Can you help me update our Family Medicine clerkship curriculum?"
Why this is important: Ambiguous language can confuse the AI, leading to irrelevant or unclear responses. Clear references help the chatbot understand exactly what you're asking.

2. Use Specific Terms

Identify the Subject Clearly: Clearly state the subject or area you need information about.

Example:
Instead of: "What resources does STFM provide?"
Try: "I'm a new program coordinator for a Family Medicine clerkship. What STFM resources are available to help me design or update clerkship curricula?"
Why this is better: Providing details about your role ("program coordinator") and your goal ("design or update clerkship curricula") gives the chatbot enough context to offer more targeted information.

3. Don't Assume the AI Knows Everything

Provide Necessary Details: The STFM AI Assistant has been trained on STFM's business and resources. The AI can only use the information you provide or that it has been trained on.

Example:
Instead of: "How can I improve my program?"
Try: "As a program coordinator for a Family Medicine clerkship, what resources does STFM provide to help me improve student engagement and learning outcomes?"
Why this is important: Including relevant details helps the AI understand your specific situation, leading to more accurate and useful responses.

4. Reset if You Change Topics

Clear Chat History When Switching Topics:

If you move to a completely new topic and the chatbot doesn't recognize the change, click the Clear Chat History button and restate your question.
Note: Clearing your chat history removes all previous context from the chatbot's memory.
Why this is important: Resetting ensures the AI does not carry over irrelevant information, which could lead to confusion or inaccurate answers.

5. Provide Enough Context

Include Background Information: The more context you provide, the better the chatbot can understand and respond to your question.

Example:
Instead of: "What are the best practices?"
Try: "In the context of Family Medicine education, what are the best practices for integrating clinical simulations into the curriculum?"
Why this is important: Specific goals, constraints, or preferences allow the AI to tailor its responses to your unique needs.

6. Ask One Question at a Time

Break Down Complex Queries: If you have multiple questions, ask them separately.

Example:
Instead of: "What are the requirements for faculty development, how do I register for conferences, and what grants are available?"
Try: Start with "What are the faculty development requirements for Family Medicine educators?" Then follow up with your other questions after receiving the response.
Why this is important: This approach ensures each question gets full attention and a complete answer.

Examples of Good vs. Bad Prompts

Bad Prompt

"What type of membership is best for me?"

Why it's bad: The AI Chat Assistant has no information about your background or needs.

Good Prompt

"I'm the chair of the Department of Family Medicine at a major university, and I plan to retire next year. I'd like to stay involved with Family Medicine education. What type of membership is best for me?"

Why it's good: The AI Chat Assistant knows your role, your future plans, and your interest in staying involved, enabling it to provide more relevant advice.

Double Check Important Information

While the AI Chat Assistant is a helpful tool, it can still produce inaccurate or incomplete responses. Always verify critical information with reliable sources or colleagues before taking action.

Technical Limitations

The Chat Assistant:

  • Cannot access external websites or open links
  • Cannot process or view images
  • Cannot make changes to STFM systems or process transactions
  • Cannot access real-time information (like your STFM Member Profile information)
