As part of this dissertation, I used a variety of tools to assess the effectiveness of the course. To determine how well I had designed the course, I asked the learners to rate my teaching using a Behaviorally Anchored Rating Scale (BARS). With this scale, described below, I asked the learners to evaluate me on specific behaviors described on the form, which gave me some insight into how the learners perceived my practices. I chose the remainder of the assessment tools to determine how the learners experienced the course, what they had learned, and how their attitudes and beliefs had been affected by their participation.
The course objectives over the length of the semester were:
By the end of this class, learners will be able to:
Since the objectives for the semester were somewhat open-ended, I decided that using a range of measures would be more responsive to the diversity of the learners and would more accurately reflect the many ways in which they met these goals. Several of the tools were qualitative: analysis of the learners’ journals; my reflective journal and personal observations; the feedback forms that SKSM asks all learners to complete; and direct feedback from members of the class. I chose the Sexuality Opinion Survey (Fisher et al., 1988) to gather quantitative data regarding some of the ways in which learners’ attitudes changed in response to the course. Using both qualitative and quantitative assessments with the BARS allowed some of the finer nuances of the learners’ experiences to emerge; this process of triangulation can help overcome the intrinsic limits of individual methods and increase the rigor of the analysis by improving the quality of the data (Patton, 2002).
Each scale on the Behaviorally Anchored Rating Scale (BARS) describes specific teacher behaviors along various continua. The teacher’s lesson can be evaluated more precisely, since clearly defined actions are less arbitrary to judge than a simple scale labeled “planning” and measured from one to ten. Assessment of well-defined behaviors allows a more accurate comparison of responses since it reduces the likelihood of divergent interpretations of concepts. These sorts of scales have been effective in professional settings to evaluate performance (Barnhart, 2001).
One advantage of using the BARS is that it clarifies the acceptable and ideal standards of adult education theory. The middle section of each scale describes adequate practices; a teacher who is rated in the middle on every range is fulfilling her responsibility to follow good practices. The left side of each scale describes optimal practices, while the right side leaves room for comments regarding ways in which the teacher’s practice could improve. By describing specific behaviors, the BARS reduces individual interpretation of concepts such as “speaking skills,” or “accuracy.”
The BARS is biased in at least two ways. First, the choice of scales that I included makes explicit that I considered certain behaviors more important to measure than others. Each aspect of the teaching model described in Chapter 2, Overview of Adult Education, has been adapted into a scale on the BARS in order to evaluate the implementation of the model of adult education that I developed for this research, but I could also have developed other scales. The second bias that the BARS makes explicit is my personal evaluation of adult education theory and what I consider “acceptable” and “optimal” practices. While I believe that I have accurately represented the practices that the adult education literature describes as effective, other researchers and educators will have other values and interpretations. This tool is primarily a cross-check on the course design, similar to the more common university-provided learner feedback forms filled out during the last class session. As a way to evaluate whether the principles of my model of adult education were applied to the class, the BARS has high face validity, which reduces the limitation inherent in using a method that is not widely accepted. The BARS form was designed in consultation with Dr. Norma Smith, who served on my doctoral committee.
In addition to the nine scales that reflect the teaching model, there are also three scales that reflect sexuality education in particular: language, accuracy and sex-positivity. I developed these three based on my observations of adult sexuality education classes and workshops, as well as discussions that I have had with adult sexuality educators. The first two of these scales could easily be adapted to almost any body of knowledge, since non-biased language and accurate information are not subject-specific. However, many of the adult sexuality workshops and classes that I have observed seemed to be based on the teacher’s opinions or on incorrect information, and were often presented in biased language that I believe does not reflect respect for the range of normative sexualities and sexualities associated with well-being, so I decided to make my personal expectations explicit. The final scale, sex-positivity, is based on my definition of sex-positivity as affirming, respecting and celebrating sexual diversity. This term is widely used colloquially but has not yet been formally defined, although its converse, sex-negativity, has been examined to some degree (Rubin, 1992). Dr. John Money is apocryphally credited with the joke that for most people, “kinky is what I like, perverse is what you like.” Sex-positivity asks us to challenge this notion and value what other people like, even when it is not what we like ourselves.
Each learner placed a mark on each scale that indicated their assessment of my practices. A few learners indicated ranges instead of single points, and some learners included comments in the space provided. After receiving the individual participants’ BARS, I compiled them on a single form. The numbers on the compiled BARS are the codes assigned to each learner to protect their confidentiality and next to each number is any comment made by that individual. All ratings and comments are reported. The placement of learner ratings allows for evaluation of the course design; clusters of ratings indicate a consistent experience among the learners, while more diffuse ratings indicate a greater diversity of experiences. My goal was to have all of the learner ratings in the “acceptable” range or higher.
The form was given to the learners at the penultimate class session and returned at the final session. Out of twelve participants, ten returned the BARS. One person missed the last class due to illness and the other forgot to return it; both said that they would mail the form to me, but neither did. Overall, the BARS indicates that I was generally successful at meeting my goals for the course design. With twelve scales and ten respondents, there were 120 possible ratings. In four instances, a learner declined to mark a scale (although in two of these cases, they wrote a comment), and two ratings fell outside the “acceptable” and “optimal” ranges, leaving 114 ratings in the ranges that indicate successful teaching practices.
For ten of the scales, there were unambiguous clusters of responses; that is, the ratings were clearly located near one another. The table below shows how many ratings fell within each range for these scales.
Scale | Optimal | Acceptable | Needs Improvement
Objectives | 7 | 3 | 0
Needs Assessment | 6 | 2 | 1
Presentation | 7 | 2 | 1
Practice | 7 | 2 | 0
Interactivity | 7 | 3 | 0
Time Management | 6 | 4 (two on the border with the “optimal” range) | 0
Speaking Skills | 6 | 4 | 0
Language | 8 | 1 | 0
Accuracy | 9 | 1 | 0
Sex-Positivity | 8 | 2 | 0
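The counts in the table can be cross-checked against the totals reported above (120 possible ratings, four left unmarked, two below the “acceptable” range, 114 successful). The following is a small verification sketch in Python; the per-range counts come directly from the table, while the allocation of one of the four unmarked ratings to the warm-up and recap scales is inferred from the arithmetic rather than stated explicitly in the text.

```python
# Per-scale BARS counts from the table: (scale, optimal, acceptable, needs_improvement)
clustered_scales = [
    ("Objectives",       7, 3, 0),
    ("Needs Assessment", 6, 2, 1),
    ("Presentation",     7, 2, 1),
    ("Practice",         7, 2, 0),
    ("Interactivity",    7, 3, 0),
    ("Time Management",  6, 4, 0),
    ("Speaking Skills",  6, 4, 0),
    ("Language",         8, 1, 0),
    ("Accuracy",         9, 1, 0),
    ("Sex-Positivity",   8, 2, 0),
]

optimal = sum(row[1] for row in clustered_scales)            # 71
acceptable = sum(row[2] for row in clustered_scales)         # 24
needs_improvement = sum(row[3] for row in clustered_scales)  # 2
marked = optimal + acceptable + needs_improvement            # 97 of 100 possible here

# Three of the four unmarked ratings fall in these ten scales (100 - 97 = 3),
# so the warm-up and recap scales account for 19 marked ratings out of 20
# possible, all "acceptable" or better per the discussion that follows.
successful = optimal + acceptable + 19
print(successful)  # 114, matching the total reported in the text
```

The tally confirms that the table and the reported totals are internally consistent: 95 successful ratings on the ten clustered scales plus 19 on the two unclustered scales yields the 114 reported.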
Some of the comments on these scales offer further detail. For example, under needs assessment, one learner wrote, “Needs arose during and after classes. Needs were considered, discussed and collectively worked with the whole group where appropriate,” while another said that I paid “very close attention to what was going on in class and modified as needed.” Meanwhile, a learner who rated this scale in the “needs improvement” range wrote “scope; appropriate classroom discussion.” Unfortunately, I am unable to determine the intent of this message in order to interpret the learner’s meaning.
The two scales that did not have clusters of responses were warm-up and recap. I tended to integrate these phases of the lessons fairly seamlessly into the opening and the presentation, so other than the check-in and check-out that framed the class, the other warm-ups and recaps may not have been apparent. This may be especially true when a session contained two or more cycles of the model, during which the recap of one cycle and the warm-up/needs assessment of the next can blur, or when the needs assessment and warm-up at the beginning of a session were combined. Whether distinct, visible warm-ups and recaps help provide a sense of safety or motivation is a question that might be explored. However, given the depth of safety generated by the class, as described below, and given that none of the responses on either of these scales was in the “needs improvement” range, the fact that the ratings did not cluster does not necessarily indicate problematic practices. With this in mind, the BARS overwhelmingly indicates that I met my goal of developing a course that successfully applied the model of adult sexuality education described above.

There are at least three ways in which the usefulness of the BARS is limited. First, while it has the advantage of asking the participants to evaluate me based on their observations of the behaviors described on the form, the range of responses on the warm-up and recap scales may indicate that not all of my actions were necessarily observable. This may be especially true if the participants are not trained or experienced as educators or group facilitators. Although I attempted to make the BARS as free from education-specific terminology as possible and to make my practices as transparent as possible, there may be a disconnection between what I was asking on the BARS and what the participants saw me do.
The second limitation is highlighted by what participant 14 wrote on the objectives scale: “I had no idea what they were. Would have been good to do a mid-course review.” However, I reviewed the course objectives on the first day of the class and included them on the syllabus. I also reviewed each day’s objectives at the beginning and listed them on the agenda for that day. In addition, this learner specifically wrote in his final journal entry:
“I rarely read chapter titles or course objectives, wanting to jump right into the meat of the material. So it was only today when I went looking for the course objectives that I found them at the beginning of the course reader.”
The limitation that this highlights is that the BARS depends on the participants’ retrospective recall of their observations, filtered through their own biases, when filling out the form. Even with these limitations, the BARS offers significant evidence that I was successful in applying the model of adult education described above.
While the BARS indicates that I implemented the education model well, it does not address the impact of the course on the learners. Some of the most important data for this dissertation came from the participants’ journals. The value of journals as learning tools is widely accepted among adult educators (Kerka, 1996). Some of the more common uses are to develop and deepen reflective practices (Boud, 2001), explore problem-solving skills (Hiemstra, 2001), improve linguistic and writing skills, and offer an additional form of learner assessment (Moon, 1999). Moon takes care to point out that “assessment does not always mean marking” (p. 93); that is, the use of journals to discover what and how people are learning does not necessarily require formal grading. My use of the learners’ journals as an assessment tool during the semester is consistent with this view, since they were not graded.
The format for many of the journal assignments was based on Boud’s (2001) model of reflection on previous events. As a way to reexamine prior experiences in light of new information, models or situations, this three-step process was compatible with the objectives for many of the class sessions. The first step, Return to Experience, asks the learner to mentally revisit a scenario and describe the event as clearly as possible. This allows her to reconnect more deeply with the emotional experience during the second step, Attending to Feelings. In this stage, having reconnected with the events, the learner can more fully describe the emotions that arose during the experience, or that arise in the present moment in response to the journaling. Finally, the act of Reevaluation of Experience allows the learner to relate new information to the past, to seek relationships between new and old ideas, to determine the authenticity of both old and new ideas, and to integrate new knowledge. This model allowed me to develop journal assignments that would both support the learners in meeting the course objectives and generate data for my analysis.
Among the issues that can arise when teachers have access to learners’ journals is the degree to which the learners feel influenced by the knowledge that their writing will be read (English, 2001), and in this case, analyzed as research material. This raises both questions of validity, since the learners’ awareness that they are writing for an audience may affect their decisions with respect to topic, tone or content, and questions of ethics, since the teacher will read their reflections and could misuse that knowledge about the learners. These two issues intersect, since a learner’s fears regarding the ethics of the teacher could easily influence their journaling and therefore the validity of their writing as a data source. I decided that the ethical concerns could be addressed to a large degree by developing and implementing multiple mechanisms of safety that the learners could control. First, I explained that for those learners participating in the Phase 2 research, I would black out all identifying information in their journals in order to maintain confidentiality. Second, I reminded the participants throughout the semester that if they wanted me to exclude a specific submission from the pool of research data, they could do so with a verbal request or by making a note in their journals. Third, all of the learners, whether Phase 2 participants or not, were always given the option not to submit any of the journal assignments. I did strongly suggest that completing the assignment would help support their learning, even if they chose not to give me copies of their work. As another way to maintain confidentiality, the learners were assured that after acceptance of this dissertation by the Union Institute and University, all of their journal submissions would be destroyed.
Since I asked them to give me copies of their journals rather than the originals, they would be able to continue to use their journals in any way that they desired, even though I would no longer have access to them. Finally, I told the learners at the beginning of the semester and whenever they had concerns, that my primary goal was for them to get the most out of the class and that gathering research data was secondary to that. I made sure they understood that I preferred they choose to withhold any material from the project rather than compromise their sense of safety.
All of these considerations address the ethical concerns, but they do not directly address the questions of validity. However, I decided that if I could set an overall tone of respect and safety for the class, and if we could develop dynamics of mutual respect, the learners would be able to use the journals as authentically as possible. In addition, since the learners always had the option to withhold any portion of their journals without penalty, they had a significant measure of control with respect to this issue. From a research perspective, the issues of validity rest with the participants; all I could do as a researcher and a teacher was address my own choices (i.e., the question of my ethics) and trust that the participants’ choices would allow them to use the journals as authentically as possible, thereby offering me valid research material.
The journal assignments that I received generally demonstrated the learners’ abilities to link the reading and class discussions with their experiences. For example, in the second class session, three learners described perfectionism, a strategy for reducing the likelihood of being shamed, as one that they had adopted; all three specifically wrote that while they had not previously drawn the connection between shame and perfectionism, they were not startled by it. In one learner’s words, “It was not a surprise to me to read this week that perfectionism is one tactic for avoiding being shamed. It’s my personal favorite!” Another learner described the physical sensations of a shame experience from over twenty years earlier in vivid detail that was consistent with Affect Theory’s description of the somatic response of shame.
Later class sessions that focused on the interpersonal and social manifestations of shame also inspired reflection among the learners. One learner wrote about Affect Theory and the Compass of Shame:
“I find both models extremely useful in helping me determine my own responses to both past and current experiences of shame. It helps tremendously to have some theory to apply to my own responses as well as for use with clients or my future congregants…I think that my primary response to feeling shame about an incident is to withdraw.”
This learner then described her experiences around body-image and the connections she has made between body size and shame. Another learner responded to Kaufman’s Refocusing Exercise in this way:
“I found, as I tried Kaufman’s exercise, that I had a great deal of trouble concentrating on it for more than a few seconds at a time…However, when I tried entering the present moment in which I was experiencing the shame that still resulted from the memory, I found that I was able to come out of myself a little bit. Though there were only brief, fleeting seconds of it, I find myself lighter now. There is less tension in my head and shoulders. I find my part in the event more forgivable, more correctable. I found…that I felt less shy than I normally do.” (emphasis hers)
This entry demonstrates an integration of Kaufman’s exercise, an understanding of the somatic response of shame (Affect Theory), comprehension of her shyness/withdrawal as a response to shame (one of the poles of the Compass of Shame), and an awareness that by proactively responding to shame, the defense of shyness is lessened (the value of listening to shame as a way to respond proactively). My interpretation of this short section of her journal is that it is evidence of her having met the first two course objectives.
After the class had explored somatic awareness practices in Session 7, I asked the learners to focus on the physical aspects of their emotional experiences of shame. One learner wrote:
“When I noticed my reaction, I was glancing downward, and away from her. I took a deep breath, and raised my head, and noticed that it was difficult to look her in the eyes, which is what I usually do when I am having a conversation with someone. I had to actively focus my attention on her, separate out my feelings of embarrassment…”
Not only did this person show a deeper awareness of her shame response, she also described engaging in the Refocusing exercise and being able to reengage in the conversation that had triggered her shame. As with the previous example, I believe this shows that this learner met the first two course objectives.
For the final journal assignment, I reviewed the course objectives and asked the learners:
Do you feel that you have met these goals? If so, what were some of the more important aspects of the class that supported you? If not, how could the class have better helped you meet these objectives? Do you feel that these expectations were relevant to your needs? Were they realistic? Please be as specific as you can be.
Many of the responses offered me a final look at the learners’ successes. For example, one learner wrote that she had learned how “shame is neutral in and of itself and can be helpful or harmful…depending on our response to it. We have the power to decide how we want to respond to shame.” Another person wrote that “the class gave me a framework in which to identify myself…” A third learner told me that
“Though I had known about the physiology of shame, I never connected it with the sequence of events and the consequences of shaming behavior. Reflecting back upon my parenting, I realized some painful moments where I most likely inflicted shame upon my daughters. This is painful to acknowledge. I hope that during my winter visit with them that we can discuss this…and I can work towards making amends with my daughters.”
Another learner specifically described the session on communication and wrote
“In practicing this new language skill, I find that the conversations I have with others are much more powerful, authentic, as well as more meaningful…I have already experienced the power of this communication style with my friends, and with other classmates.”
Other learners took this opportunity to tell me about other tools and strategies that they had learned.
These examples offer a brief glimpse into the ways in which the journals allowed me to assess the learners’ success at meeting the course objectives. However, there were ways in which I could have used the journals more effectively. Not all of the learners submitted journals each week. Possibly they did not feel drawn to the topic or to the process; they may have preferred to keep their journals private; perhaps they did not have the time or opportunity to complete the assignment; learners who, for whatever reason, did not meet the class objectives for that week may have chosen not to submit a journal out of concern for how I would assess or judge them; the learners may not have valued the assignments enough to do them; or writing may not be a learning style that they found useful. These are all issues that frequently arise with journals (Moon, 1999). Since the journal format that I used resulted in participant self-selection each week, I have no way to determine what influenced the learners who did not submit journals. Each week I received as few as four and as many as eleven responses, out of a maximum of twelve participants; the average was approximately 6.5 journals per week. As a possible teaching practice, I might have asked all of the learners to write something each week, even if only a note stating that they were not submitting a journal. This technique might have inspired some of them to explain or describe what prevented them from journaling, which might have provided further information about their experiences without sacrificing safety.

In addition to offering insight into the learners’ experiences, the journals also provided useful feedback regarding my teaching practices. While this was most explicit in response to my asking for it directly, there were other occasions when I was able to use their responses to assess my teaching. Overall, the learners expressed appreciation for my teaching style. One person wrote
“[your] leadership style shows a balance- between humility and empowerment, and between guiding and affirming the class- that creates a safe and stimulating environment for learning and sharing our thoughts.”
Other learners appreciated the integration of the journal assignments into the format of the classes, the use of Affect Theory as a foundation or my use of check-ins to show that I am aware of the learners’ experiences. Some of them also expressed an appreciation for specific readings such as Csikszentmihalyi’s work on flow, both because of its relevance and because it was lighter in tone than some of the other articles.
The feedback assignments at the end of the semester also showed me ways in which the learners felt supported by my teaching. One learner wrote that “[t]he overall arch of the class worked- from the clinical beginning to the presentations.” Another wrote that “[t]he readings were excellent, and class discussions brought them to life.” Other learners wrote that they felt that the class met the objectives, or that they had personally met the objectives of the class. None of the respondents wrote that they did not think that they had met the objectives.
There were also ways in which the journals offered me information that helped me improve my practices, especially the journal topic assigned after Session 4, which specifically asked for their feedback; eleven of the twelve Phase 2 participants responded. Some learners found the readings too long and would have preferred a larger number of shorter articles in order to read different perspectives. Another learner suggested that I add a spiritual component to the class; many SKSM classes begin with a candle-lighting and a reading to frame the session, often carried out by learners. Another suggestion was to work more in small groups, since this learner sometimes felt intimidated speaking in the larger setting. She also suggested that I spend more time reviewing and summarizing the reading, as did two other learners. I used these last two suggestions to modify the class design during the remainder of the semester.
Some learners had feedback that was specific to certain aspects of the class. One person suggested that I could have spent more time discussing the links between shame and oppression, as well as how to guide people through experiences of shame. Another learner wrote that she would have liked it if we had taken advantage of the “unique atmosphere that was created, and…the level of trust that was achieved in only a semester’s time…” to hear each other speak about personal experiences through small groups or paired exercises.
My interpretation of the feedback that I received through the journals is that while some learners would have preferred a format more suited to their individual styles, the overall design was successful while still leaving room for improvement. The learners who were most specific about how I might improve my teaching were also the ones who felt that they had gained the most from the class, so perhaps their success helped them feel safer about offering me feedback. In addition, given that some of the suggestions conflicted with others (e.g., offering both more lectures and more small-group exercises within the same time constraints), I believe that the syllabus was well designed and met my goals as a teacher and my learners’ goals as seminarians and as members of this class. This interpretation is bolstered by the fact that almost all of the learners responded to my direct request for feedback.
Finding the balance between giving the learners control over their self-direction and my need to gather research data was one of the challenges that I knew I would face, and I made my priorities clear: the learners’ safety, as self-defined, was my first consideration. Rather than requiring all of the learners to demonstrate their success through the journals, I offered them multiple opportunities in multiple formats. My observations of the class discussions and learner presentations provided me with another source of information. After each class, I spent between thirty and sixty minutes journaling about my experiences and observations in order to capture them as data.
There were several ways in which SKSM’s culture and group norms supported my ability to assess the learners’ integration of the material. For example, SKSM has a cultural expectation of individual reflection and personal responsibility; rather than making generalizations in order to distance themselves from their experiences, the learners with whom I have communicated have usually spoken from their personal perspective in a deeply heart-centered way. While this has certainly not always been the case, it is much more of an expectation than in many communities outside SKSM. This gave me many excellent opportunities to observe how the learners took the course and made it personally relevant.
Another norm that influenced my observations of the learners is that they are expected to be comfortable advocating for themselves and to take responsibility for stating their needs. Whether they decided that the group’s covenants needed further clarification, or that they wanted to change the room’s physical space, or that they disagreed with a statement that I made, the learners were definitely able to state what they wanted or needed, and were usually able to negotiate in order to find a workable solution. While this sometimes felt threatening to me because it caused me to question my skills as a teacher, it more often allowed me to inquire more deeply in the group discussions and ask more challenging questions because I was aware that they would take much of the responsibility for maintaining their sense of safety and their personal boundaries. This is consistent with much of the theoretical work on self-direction.
In conversations that I have had with some of the learners since the end of the semester, I have been able to determine that these dynamics were affected by my status as an Associate Faculty member. Since my relationships with them were limited in scope, some of them felt a deeper sense of safety in exploration than they might have if I had also been their advisor or had sat on their Portfolio Committee. The temporal and academic boundaries that my faculty status placed on our relationships created a greater sense of freedom that allowed more risk-taking. Since I was not aware of this specific dynamic until after the semester had ended, I was not able to mindfully work with this aspect of my relationships with the learners in order to develop the depth and openness of our interactions. A more detailed understanding of my position within the larger context of the school as an institution might have helped me work with opportunities that I missed (Tisdell, 2001). However, even without grasping all of the reasons for the level of connection and openness among the learners, I was able to work with it and integrate it into the course.
During the first half of the semester (six out of thirteen sessions), the primary focus of the course was to explore various models and theories so that we could describe and explain shame in a variety of contexts. In order to assess how well the learners were understanding the mechanisms of shame and crafting new strategies to respond to it within a range of settings, I used several tools within the classroom. The most fundamental was paying attention to whether the learners were using the terms and concepts of the different writers. Each time a learner referred to Affect Theory, the Compass of Shame, the Interpersonal Bridge or the Rules of Shame, I was able to assess how they had understood the reading and how they were exploring its relation to their experiences. Additionally, much of my attention was devoted to the learners who did not use the same terminology but were describing the concepts. For example, if instead of “interpersonal bridge” a learner talked about “connection,” I strove to ascertain what it was that they meant; frequently, the simple act of asking them for more detail helped me understand their meaning. Whether the learners used the language of the authors or their own, each one of them demonstrated their understanding of the material through the group discussions.
Besides observing the integration of the concepts into the learners’ language, I also took care to observe how the learners were reexamining past experiences in light of the models we were discussing. The journals, as described above, were one way that I was able to do so, but in order to work with the diversity of learning styles, I decided that incorporating this process into the structure of the class would support the verbal learners just as the journals supported the learners who work well with writing. Although it is hard to quantify, throughout the semester I observed many clear “a-ha!” moments, as described in the overview of the course above.
While the first half of the semester was primarily focused on exploring theories of shame, the final third (sessions ten through thirteen) was where the learners were able to demonstrate a range of their intellectual and personal growth. In addition to showcasing the learners’ authenticity and risk-taking, the project presentations were an excellent way for me to observe how they had met the course objectives. Through poetry, stories, sermons, artwork, PowerPoint presentations, song, music, dance or ritual, the learners offered glimpses into how their relationships with sexuality, spirituality and shame had evolved. My original plan was to have the learners talk about how they experienced the projects, both as presenter and witness, during the final class session, with the hope that they would be able to talk about what they had learned about the topics and themselves. Even though I was not able to do so as a result of shifting the agenda in response to the learners’ needs, the presentations served this purpose quite well.
Most of the learners specifically used the language that we had developed in the class as part of their presentations; Csikszentmihalyi’s (1991, 1997) concept of flow became a central theme for a story about healing from incest and abuse, while a sermon on love used imagery that was reminiscent of the interpersonal bridge. Other learners found other ways to work the core concepts of the class into their projects. For example, the dance piece described above included movement that evoked connection/disconnection and eye-contact/separation; afterwards, many members of the class specifically commented on how it reminded them of the shifting nature of love and shame. A learner who offered an autobiographical PowerPoint presentation of his life as a transgendered man used a variety of visual images, both as primary themes and as background for text, that showed different areas of Affect Theory and the relationships between sexuality and spirituality. One learner told her stories as a rape survivor and shared her artwork; while she had completed the painting years earlier, the imagery was clearly aligned with the models of shame that we had been exploring and her narrative shifted fluidly between describing the past and the present. Another learner told her story as a rape survivor four times; each time she re-told it focusing on a different aspect of the course (sex, shame, spirit and power) and showed how the class had changed her image of herself and her story.
Other learners chose topics or formats that were less directly grounded in the content of the course and instead, found other ways to make connections between the course and their projects. For example, a learner who shared her poetry with us talked about how she had never shared any of it with anyone and how she could feel the “blush of shame.” She specifically described the somatic response that she was noticing and how she was responding to it in that moment. A learner who offered us a song that he had composed (accompanied on the piano by another class member) preceded it by talking about finding his voice by working through his shame. Since the slump of shame interferes with the ability to sing, he drew links between overcoming shame and performance. Another learner preceded her sermon by talking about her fears about being a lesbian minister and how that affected her choices about sermon topics. In these, and in other ways, all of the members of the class made their fears and shames visible. Since one of the responses to shame is the desire to hide, these people modeled their growth by authentically overcoming their shames in community.
Although these examples of my observations represent a small sample of the range of the learners’ experiences, they offer an overview of the sorts of behaviors that I witnessed. All of the learners found ways to take the course material, make it relevant to their lives, and fuel the personal transformations that I had been hoping to observe. I believe that this is a clear indication that they understood the concepts of the course and met the class objectives, and that their successes reflect my success at creating the “pedagogical content knowledge” that I had hoped for.
There are at least two limitations to using my observations to evaluate the class. First, the challenges of managing the flow of a class, lecturing or otherwise presenting information, maintaining safety while supporting risk-taking, and handling the time constraints required much of my attention. I believe that for every instance in which I perceived evidence of the learners’ successes, there were others that I missed; conversely, while I was able to recognize many situations in which needs arose and to address them, I am sure that there were many that I did not see. Secondly, I have a clear bias in that, as both a teacher and doctoral researcher, I want to be able to demonstrate my success through the learners’ successes. While I have striven to remain open to the data and describe it honestly, I have no external mechanism for validating my interpretations because there was no independent observer present. However, while my descriptions of my observations might not be sufficient independent evidence, when they are related to the journals and the BARS, a larger pattern of success becomes evident. Thus, while bias is inherent in both my observations and my interpretations, the larger pool of data supports my overarching conclusions.

During the final session of the semester, I gave the learners the feedback forms that SKSM uses to evaluate courses. As is standard in university courses, after I left the room, the learners completed the forms and collected them in an envelope. The anonymous forms were then returned directly to the school’s administration, and copies were mailed to me after I had submitted the final grades.
I did not discuss using the feedback forms in my original proposal to the Union Institute and University Institutional Review Board. After receiving and reviewing them, I became aware that they comprised another set of data that would offer further opportunities to assess the semester. I contacted Mr. David Dezern, who serves as the Executive Assistant to the Vice President of Academic Affairs and the Dean of the Faculty and is the primary contact person for all Associate Faculty. I asked him what SKSM’s practices and expectations are with respect to these forms. His response, sent to me via email, was that the learners are aware that faculty receive copies of the forms and that they are used by the Curriculum Committee and are available to the Dean of the Faculty and the President of the school. He also informed me that some faculty use the forms as part of a professional portfolio; for example, people often send selected ones in with resumes, etc. to show how students received the class. Finally, he wrote that “if it is a question of our institutional policy, you are very welcome to use information from the forms for your dissertation.”
There are two sections to the learner feedback forms. The first is a series of questions that are rated from 1 (poor) to 4 (excellent); a rating of 0 indicated “not applicable.” A list of the questions is located in the appendix. The learner ratings are summarized in the table:

| Question | Mean | Standard Deviation | Median | Lowest Score | Highest Score |
|----------|------|--------------------|--------|--------------|---------------|
| 1 | 3.8 | 0.43 | 4.0 | 3 | 4 |
| 2 | 3.7 | 0.47 | 4.0 | 3 | 4 |
| 3 | 3.9 | 0.27 | 4.0 | 3 | 4 |
| 4 | 3.8 | 0.60 | 4.0 | 2 | 4 |
| 5 | 3.7 | 0.46 | 4.0 | 3 | 4 |
| 6 | 3.7 | 1.07 | 4.0 | 0 | 4 |
| 7 | 3.7 | 0.61 | 4.0 | 2 | 4 |
| 8 | 3.9 | 0.27 | 4.0 | 3 | 4 |
| 9 | 3.3 | 1.14 | 4.0 | 0 | 4 |
| 10 | 3.6 | 1.09 | 4.0 | 0 | 4 |
| 11 | 0.9 | 1.75 | 0.0 | 0 | 4 |
| 12 | 0.7 | 1.49 | 0.0 | 0 | 4 |
| 13 | 3.4 | 1.45 | 4.0 | 0 | 4 |
| 14 | 4.0 | 0.00 | 4.0 | 4 | 4 |
| 15 | 3.5 | 1.13 | 4.0 | 0 | 4 |
| 16 | 3.7 | 1.07 | 4.0 | 0 | 4 |
Overall, the learners rated the course very highly. The mean scores for almost all of the questions were 3.3 or higher. The two exceptions were for items 11 and 12, which referred to course papers; since there were no papers in this course, most learners gave a rating of 0, although two gave a rating of 4. Similarly, the median score for all of the questions except for these two items was 4. In addition, although it is not reflected in this table, the lowest non-zero score for any of the items was 2, indicating “good.” While forms like this one provide less detailed feedback than the BARS, I believe that this shows that the learners were satisfied with my teaching practices.
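Because a rating of 0 means “not applicable” rather than a low score, including zeros in the mean (as happened for items 11 and 12) pulls the average down without reflecting dissatisfaction. The sketch below, using made-up responses, shows how such items could be summarized with N/A answers excluded; the helper function is hypothetical, and the actual SKSM forms were not processed this way:

```python
from statistics import mean, median, pstdev

def summarize(ratings):
    """Summarize 1-4 ratings, treating 0 ("not applicable") as missing.

    Hypothetical helper for illustration only; the SKSM forms were not
    actually processed this way. Returns None if every response was N/A.
    """
    valid = [r for r in ratings if r != 0]
    if not valid:
        return None
    return {
        "n": len(valid),
        "mean": round(mean(valid), 2),
        "median": median(valid),
        "std_dev": round(pstdev(valid), 2),
    }

# Made-up responses for a "course papers" item that most learners
# skipped: the raw mean over all fourteen forms is dragged toward zero,
# while the substantive answers are uniformly positive.
raw = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 4]
print(round(mean(raw), 2))  # mean including N/A responses
print(summarize(raw))       # statistics over substantive responses only
```

Reporting the count of substantive responses alongside the mean makes it clear when a low average reflects skipped items rather than criticism.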
The second section of the SKSM feedback form is an open-ended evaluation of the course. This provided more specific information regarding the learners’ experiences. There was more information with respect to my teaching practices than the content of the course. Some learners commented specifically on my organization of the material:
“Progression of material was excellent”
“The professor’s preparation, syllabus and daily agendas were exemplary.”
“useful resources”
“He was …very well organized and well informed on the topics.”
“well prepared as an instructor.”
“The readings were challenging and on-topic in excellent ways.”
“Instructor’s preparation and presentation outstanding and on target always.”
Some of the learners also offered specific feedback on ways in which I could improve this aspect of my practices, especially with regard to my decision to minimize the use of lectures and the reading. Out of the fourteen feedback forms, these were all of the comments suggesting areas for development:
“I would have liked more lecture that specifically taught the course material before discussion.”
“It was hard to keep up with the amount of reading, even though the professor seemed to make allowances for it.”
“Would like longer lectures on the material.”
“Some of the reading was repetitious.”
“I would have liked…some guidance about which readings were most important to look at. Also, I would have liked more written notes (handouts, flip chart, etc.) during lecture and discussion.”
Since the topics of the course were very challenging, I was particularly interested in any feedback regarding my role as facilitator. Some of the learners addressed this specifically:
“[He] was open and well-informed. He created an atmosphere of comfort in which to deal with difficult issues.”
“[He] led this class through an incredible process of sharing, discovery, honesty, and vulnerability.”
“Most helpful- culture of safety and responsiveness to group input by instructor.”
“[He] is a respectful, engaging and caring instructor. I really appreciated the tone and community of the classroom each week.”
“I enjoyed the instructor’s gentle manner and respectful tone. He set the standard for how an instructor ought to conduct a class.”
“He handled conflicts in class with great care, respecting everyone.”
The learners were also very supportive of the relevance of the course to their academic and professional needs:
“great class topic”
“This is a long-needed course, and I believe SKSM should consider offering it again.”
“This material has been pertinent in all areas of life: home, family, work, church and especially self-growth.”
“The concepts and ideas came into my other classes without really trying. That meant for me how much a “core” concept this is for ministry.”
“This discussion of sexuality & shame, and how the two relate to spirit & power is something desperately needed in the church today.”
“This work is needed in the world and in this school.”
“great content- pertinent to ministry”
“This course was very helpful for me in terms of pastoral care issues. I learned a great deal about how shame looks and feels, in myself and in others.”
Finally, several learners made it clear that this course had a significant impact on their lives and suggested that SKSM repeat the class:
“This class empowered all of us!”
“This course should be a requirement for all SKSM M.Div students.”
“This was a graced experience.”
“The course was well worth the effort and should be regularly offered.”
“I highly recommend Charlie Glickman teach this course or another course on related material at Starr King.”
“Offer this course again.”
“This class was extremely meaningful to me…A beautiful experience.”
Both sections of the SKSM learner feedback forms give clear evidence that the course was successful in terms of my practices, learner satisfaction and relevance to their needs. In addition, they also offer a brief glimpse at some of the transformative changes that many of the learners experienced. While the journals and the classroom experiences provide many more details, the depth and grace of the changes that the learners undertook are still evident in their comments.
While the feedback forms were a rich source of data, there are at least two weaknesses to using them in this way. First, the learners were aware that I and the Curriculum Committee would read their comments, and some people may have withheld critical perspectives out of a concern about seeming negative; this is always a possibility with these kinds of forms. Secondly, the forms were administered during the last session of the semester. While this is convenient for the administration and has the advantage of gathering feedback while the learners’ experiences are still fresh, there is no opportunity for a period of reflection. Asking the learners to fill out the forms again after a few weeks have passed might give them that opportunity, although it would be a logistical challenge for the administrators and teachers.
To measure attitudes towards sexuality, I used Byrne’s Sexual Opinion Survey (SOS), which assesses erotophobia as a dimension of personality. The SOS is a 21-question psychometric survey that charts responses to sexual cues along a negative-positive axis. The sexual cues cover a wide range of topics including heterosexuality, homosexuality, autosexuality, visual stimuli, and fantasies (Davis et al., 1998). The SOS is administered using a 7-point Likert scale and yields total scores ranging from 0 (most erotophobic) to 126 (most erotophilic).
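The scoring can be sketched as follows. This is only a plausible reconstruction: the item keying is taken from the subscale analysis later in this chapter, and the reverse-keying that maps the 21 seven-point items onto the 0-126 range is inferred rather than quoted from the published scoring key.

```python
# Plausible sketch of SOS total scoring. ASSUMPTIONS: the item keying
# below is taken from the subscale analysis later in this chapter, and
# the reverse-keying that maps the 21 seven-point items onto a 0-126
# range is inferred, not quoted from the published scoring key.
EROTOPHILIA_ITEMS = {1, 3, 4, 7, 8, 9, 10, 11, 17, 18, 21}  # lower rating -> higher total
EROTOPHOBIA_ITEMS = {2, 5, 6, 12, 13, 14, 15, 16, 19, 20}   # lower rating -> lower total

def sos_total(responses):
    """responses: dict mapping item number (1-21) to a 1-7 Likert rating."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 7:
            raise ValueError(f"item {item}: rating {rating} outside 1-7")
        if item in EROTOPHILIA_ITEMS:
            total += 7 - rating  # reverse-keyed: a rating of 1 contributes 6
        else:
            total += rating - 1  # direct-keyed: a rating of 7 contributes 6
    return total  # 0 (most erotophobic) to 126 (most erotophilic)

# Maximally erotophilic answers on every item yield the top score.
philic = {i: (1 if i in EROTOPHILIA_ITEMS else 7) for i in range(1, 22)}
print(sos_total(philic))  # 126
```

With 21 items each contributing 0 to 6 points, the totals span exactly the 0-126 range described above.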
Research on erotophobia is well established, and the links between erotophobia and education have been explored by a variety of researchers. Erotophobia is correlated with a variety of characteristics that have significant consequences for sexual education. For example, erotophobic people tend to have higher rates of negative reactions to masturbation and homosexuality, sexual dysfunction, greater difficulty in learning and retaining contraception information, more challenges discussing sexual content and inconsistent use of contraception (Fisher et al., 1988). According to one study, health teachers who were more erotophobic rated contraception and other “controversial” topics as less important to include in college classes on sexuality; the score on this test was the best predictor of what health educators actually taught. In fact, “erotophobic teachers were less likely…to teach about birth control, abortion and sexual alternatives to coitus” (Yarber & McCabe, 1981, 1984, as cited in Fisher et al., 1988). In addition to affecting beliefs about sexuality, erotophobia has been shown to correlate with other facets of personality such as generalized value orthodoxy, authoritarianism, the need for achievement, rigid adherence to traditional gender roles, and sex guilt.
Other research has shown that college students who were high in erotophobia perceived themselves as less vulnerable to sexually transmissible infections (Schmidt, 1991) and that physically abusive men in heterosexual relationships showed more erotophobic attitudes than non-abusive men (Hurlbert & Apt, 1991). Images of breasts in an information brochure on breast self-examination resulted in highly erotophobic women feeling less competent in examining themselves and being “less likely to claim that they did things to improve their health” than highly erotophobic women who read brochures without pictures; low-erotophobic women, however, found the brochures with images easier to understand (Labranche et al., 1997). Erotophobia also influences education in more subtle ways; art students who measured as more erotophobic were less likely to include details when drawing sexual organs (Przybyla et al., 1988).
At the first session of the class, learners were given the SOS, which was administered again at the end of the semester, with one difference. Response Shift Theory has shown that pre- and post-intervention surveys do not always accurately reflect a true shift in knowledge, since the participants may not have enough information to accurately judge their state before the intervention. For example, if asked “How effective is your communication?” on a 5-point scale, some people may respond with a rating of 4 for both the pre- and post-test, which would indicate no change in communication effectiveness over the course of the intervention. However, if over the course of the intervention, the participants came to realize that their communication at the beginning was not as effective, but they didn’t realize it at the time, this might account for these apparently unchanged scores. In effect, there was a change in how they understood the question as a result of the intervention. In this case, comparing pre- and post-test scores actually compares two different questions that happen to have the same wording.
Response Shift Theory addresses this by administering a post-intervention survey that asks participants both for their current state (post-test) and for their current assessment of their state at the beginning of the intervention (then-pre-test). By measuring how participants self-assess their change over the course of the intervention, a more accurate triangulation of the effectiveness may be obtained (Mann, 1997; Rockwell & Kohn, 1989; Rohs, 1999, 2002; Rohs et al., 2001). Comparisons between the pre-test, the post-test and the then-pre-test can offer a deeper level of analysis of the effectiveness of the intervention. The second SOS was handed out at the penultimate class session, to be filled out privately and returned in a sealed envelope.
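The three comparisons that the then-test design makes possible can be summarized as follows. The function name and sign conventions are illustrative (some authors compute the shift as pre minus then-pre); the sample values are Learner 1’s totals from the table of SOS scores.

```python
# Sketch of the three comparisons a then-test design allows. The
# function name and sign conventions are illustrative (some authors
# compute the shift as pre minus then-pre); the sample values are
# Learner 1's totals from the table of SOS scores.
def response_shift_summary(pre, then_pre, post):
    return {
        "conventional_change": post - pre,    # naive pre/post comparison
        "response_shift": then_pre - pre,     # recalibration of the internal yardstick
        "perceived_change": post - then_pre,  # change as the participant now judges it
    }

print(response_shift_summary(pre=89, then_pre=103, post=103))
```

For this learner the conventional pre/post comparison shows a gain of 14 points, but the then-test suggests that the entire gain lies in a recalibration of how the questions were understood rather than in a perceived change over the semester.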
The total scores of the ten participants who returned both surveys were:
| Learner # | 1st SOS | 2nd SOS (then-pre-test) | 2nd SOS (post-test) |
|-----------|---------|-------------------------|---------------------|
| 1 | 89 | 103 | 103 |
| 2 | 66 | 84 | 90 |
| 3 | 105 | 115 | 124 |
| 8 | 107 | 98 | 98 |
| 9 | 79 | 69 | 80 |
| 10 | 101 | 96 | 103 |
| 11 | 116 | 114 | 113 |
| 13 | 117 | 118 | 118 |
| 14 | 79 | 99 | 100 |
| 15 | 103 | 111 | 111 |
| Average | 96.2 | 100.7 | 104.0 |
| Median | 102.0 | 101.0 | 103.0 |
| Std. Deviation | 16.3 | 14.5 | 12.5 |
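As a check, the summary rows can be reproduced from the individual totals; the standard deviations appear to be population (rather than sample) standard deviations:

```python
from statistics import mean, median, pstdev

# Totals transcribed from the table above (the ten participants who
# returned both surveys).
pre      = [89, 66, 105, 107, 79, 101, 116, 117, 79, 103]
then_pre = [103, 84, 115, 98, 69, 96, 114, 118, 99, 111]
post     = [103, 90, 124, 98, 80, 103, 113, 118, 100, 111]

for label, scores in [("pre", pre), ("then-pre", then_pre), ("post", post)]:
    # pstdev = population standard deviation, which matches the table;
    # the sample standard deviation (stdev) would be slightly larger.
    print(label, round(mean(scores), 1), round(median(scores), 1),
          round(pstdev(scores), 1))
# pre 96.2 102.0 16.3
# then-pre 100.7 101.0 14.5
# post 104.0 103.0 12.5
```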
Although for some individuals there were significant differences, either between the first test and one of the second tests or between the then-pre-test and the post-test, the total scores do not show a significant overall change. Analysis of the four factors described in Gilbert & Gamache (1984) also did not show significant correlations. Some of this may be attributed to the fact that the majority of the participants were at the high (most erotophilic) end of the range; the median scores are in the 92nd percentile and the majority of the scores are in the top quartile (Gilbert & Gamache, 1984). As a result, there was little room for a significant increase in scores. In addition, the sample may have been too small (N=10) for patterns to become evident.
In order to explore whether any patterns were present, I also analyzed the responses to two groups of questions. The first group represents the “erotophilia subscale,” i.e. questions for which a lower response indicates a higher total score (more erotophilic); the second group represents the “erotophobia subscale,” i.e. questions for which a lower response indicates a lower total score (more erotophobic). This data is summarized below:
Erotophilia Subscale

| Item Number | Pre-Test Average Score | Post-Test Average Score | Shift |
|-------------|------------------------|-------------------------|-------|
| 1 | 2.5 | 2.5 | 0 |
| 3 | 2.8 | 2.7 | -0.1 |
| 4 | 1.2 | 1.2 | 0 |
| 7 | 3.5 | 2.5 | -1 ** |
| 8 | 2.4 | 1.3 | -1.1 ** |
| 9 | 3.2 | 2 | -1.2 ** |
| 10 | 3 | 2.2 | -0.8 * |
| 11 | 2.2 | 1.5 | -0.7 * |
| 17 | 3.2 | 3.2 | 0 |
| 18 | 1.7 | 1.2 | -0.5 * |
| 21 | 2.4 | 1.9 | -0.5 * |
| Avg. | 2.55 | 2.02 | -0.54 |

Erotophobia Subscale

| Item Number | Pre-Test Average Score | Post-Test Average Score | Shift |
|-------------|------------------------|-------------------------|-------|
| 2 | 6.3 | 6.4 | 0.1 |
| 5 | 6.8 | 6.9 | 0.1 |
| 6 | 6.1 | 6.7 | 0.6 * |
| 12 | 5.8 | 6.4 | 0.6 * |
| 13 | 3.8 | 3.8 | 0 |
| 14 | 4 | 4.5 | 0.5 * |
| 15 | 5.6 | 6.3 | 0.7 * |
| 16 | 6.5 | 6.6 | 0.1 |
| 19 | 6.5 | 6.4 | -0.1 |
| 20 | 5.9 | 5.2 | -0.7 * |
| Avg. | 5.73 | 5.92 | 0.19 |

\* 1 > |D| ≥ .5&nbsp;&nbsp;&nbsp;\*\* |D| ≥ 1
When the average scores for each question are analyzed according to these subscales, two complementary patterns emerge. On the erotophilia subscale, out of eleven questions, four had an average shift between .5 and 1 in magnitude and three had a shift of 1 or greater in magnitude, all in the erotophilic direction; no questions shifted in the erotophobic direction. (For this subscale, a decrease in the item score corresponds to an increase in the total score.) On the erotophobia subscale, out of ten questions, four had an average shift between .5 and 1 in magnitude toward erotophilia and one shifted in this range toward erotophobia. Overall, nine questions showed no meaningful change (shifts of 0 or .1).
As the table shows, there were significant positive shifts for three of the questions, a significant negative shift for one question, eight somewhat significant positive shifts and no significant shift for nine questions. While the overall scores did not vary significantly, there were patterns in how participants responded as shown by the two subscales. This may indicate that they were moving towards a more positive attitude in certain areas or for certain topics. It may also show that while they were not “letting go” of erotophobic attitudes, they may have been taking a first step by moving towards erotophilic attitudes; whether learners later return to their previous mindset or develop a new one that was less erotophobic could be explored in future research.
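These counts can be tallied directly from the subscale averages. The sketch below classifies each item’s average shift by magnitude regardless of direction, so it finds nine shifts in the .5-1 band: the eight toward erotophilia plus the one toward erotophobia.

```python
# Average per-item shifts transcribed from the subscale tables. On the
# erotophilia subscale a negative shift is movement toward erotophilia;
# on the erotophobia subscale a positive shift is.
erotophilia_shifts = [0, -0.1, 0, -1.0, -1.1, -1.2, -0.8, -0.7, 0, -0.5, -0.5]
erotophobia_shifts = [0.1, 0.1, 0.6, 0.6, 0, 0.5, 0.7, 0.1, -0.1, -0.7]

all_shifts = erotophilia_shifts + erotophobia_shifts
large      = sum(1 for d in all_shifts if abs(d) >= 1)         # ** in the tables
moderate   = sum(1 for d in all_shifts if 0.5 <= abs(d) < 1)   # * in the tables
negligible = sum(1 for d in all_shifts if abs(d) <= 0.1)
print(large, moderate, negligible)  # 3 9 9
```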
Unfortunately, the SOS did not provide much data that was useful in assessing the course. In part, this was a limit imposed by the small sample size. In addition, the SOS is a tool to measure attitudes towards specific aspects of sexuality; since the Phase 1 research indicated that UU ministers did not need sexuality education around those topics per se, the course was not “taught to the test.” This was a serious limitation of the research design and was the result of my needing to begin the process of applying to the Union Institute and University Institutional Review Board for approval of the Phase 2 research before the Phase 1 data had been fully analyzed and used to develop the syllabus. Finally, as a psychometric tool, the SOS may not be an effective measure to assess changes in attitudes or beliefs over time. Further research in this area might be helpful for future use of the SOS.