SALT

Full Guidelines for Reviewing and Interpreting SALT Results (General and Specific to Course Assessment)

Students’ responses to SALT represent their perceptions of our courses and of our teaching. Their perceptions can provide useful information to inform and potentially improve our teaching, but they are certainly not the only source. We can also improve our teaching based on students’ performance on tests, papers, and other assignments; our own sense of what is or is not working in our courses; conversations with colleagues; and readings and workshops related to teaching. Decisions about the effectiveness of our teaching are best informed by converging evidence from these different sources. These guidelines suggest ways to review and interpret SALT results that may be helpful for you; they describe what you can do with your SALT results, not what you should do with them.

These guidelines have three sections. The first section includes General Guidelines that describe a series of steps for reviewing and interpreting your SALT results. These guidelines can be used for both the Course Assessment and Teaching Assessment sections of SALT; you need not include every step in your review, nor follow the steps in the sequence described here. The second section explains why we need to be cautious when we interpret comparisons of our individual averages to the Hope Average. The third section describes guidelines that are specific to the Course Assessment section. If you are interested in discussing your SALT results with someone confidentially, you can go to SALT faculty consultants to access a list of colleagues who are familiar with these guidelines and who are eager to work with you.

General Guidelines

Step 1: Review SALT Results in Light of Your Instructional Priorities

  • Before looking at your SALT results, take some time to think about what you are trying to accomplish in your course and reflect on how you think the course has gone this semester.

  • Based on these reflections, identify two or three aspects of the course that you want to focus on, or that are especially important to you for this course, and that are reflected in SALT, e.g., the “write effectively” item from the Course Assessment section or the “provided helpful feedback on assigned work” item from the Teaching Assessment section.

  • Skim students’ written comments to get a sense of their responses for the SALT items that you have identified.

  • Review the frequency distributions for these SALT items to get a sense of all the students’ responses. [You may want to compute the percentage of students who respond “A Great Deal” or “Quite a Bit” on the Course Assessment or “Strongly Agree” or “Agree” on the Teaching Assessment; a short sketch of this calculation follows this list.] If you have taught more than one section of the course, you can combine the frequency distributions across sections.

  • Compare the frequency distribution and written comments. When a student expresses a specific concern, you can check the frequency distribution to see whether the student’s written response is articulating a concern shared by other students or whether the student has a genuine concern that is not consistent with the other students’ responses in the frequency distribution. In general, it is more helpful to view students’ written responses and their quantitative responses as complementary rather than as two independent and separate sources of information.

  • Reflect and take notes on what you have learned about the SALT items you have identified. Describe how satisfied you are with each of these aspects of your course.
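
If you find it convenient, the percentage calculation and the combining of sections described above can be scripted. The sketch below (Python) is only an illustration: the frequency counts are hypothetical, and the response labels are assumed to match the Course Assessment options used elsewhere in these guidelines.

    # Minimal sketch for summarizing one SALT Course Assessment item.
    # The counts below are hypothetical; substitute the frequency distribution
    # from your own SALT report.

    section_1 = {"A Great Deal": 6, "Quite a Bit": 9, "Somewhat": 7, "A Little Bit": 2, "Not At All": 1}
    section_2 = {"A Great Deal": 8, "Quite a Bit": 7, "Somewhat": 5, "A Little Bit": 3, "Not At All": 0}

    def top_two_percentage(counts):
        """Percentage of students responding 'A Great Deal' or 'Quite a Bit'."""
        total = sum(counts.values())
        return 100 * (counts["A Great Deal"] + counts["Quite a Bit"]) / total

    # Combine frequency distributions across sections of the same course.
    combined = {category: section_1[category] + section_2[category] for category in section_1}

    print(f"Section 1: {top_two_percentage(section_1):.0f}% in the top two categories")
    print(f"Section 2: {top_two_percentage(section_2):.0f}% in the top two categories")
    print(f"Combined:  {top_two_percentage(combined):.0f}% in the top two categories")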

Step 2: Overall Review of SALT Results

  • Read over the students’ written comments for all the items you did not consider in Step 1.

  • Review the frequency distributions for these items and consider ways in which they are consistent or inconsistent with students’ written comments.
  • Identify the SALT items for which you think you are “on track.”

  • Identify the SALT items that reflect aspects of your course that you think you could do more effectively.

Step 3: Make Instructional Changes and Assess Their Effectiveness

  • Assessing instructional changes involves a cycle: (1) check SALT results to identify what you want to improve; (2) implement the change in instruction to make the improvement; (3) check SALT results for the semester after you have made the change.

  • Identify assignments or instructional techniques that are linked with the SALT items for which you want to assess an instructional change.

  • Consider how you could modify the assignment (technique).

  • Make the change when you next teach the course.

  • Assess the change by comparing your average rating for the previous semester to your average rating for the semester in which you made the change. The cycle of making a change and assessing its effectiveness is likely to be repeated as you refine the instructional change across a few semesters.

  • The most informative comparisons (for improvement) are between your averages across semesters rather than your average compared to the Hope Average. A description of why we should be careful in making comparisons using the Hope Average is included in the second section of the guidelines.

  • Example: Using SALT Ratings for Assessment

    • Assignment Related to Writing Objective – disappointed in how students were writing rough drafts of the research report.

    • Modification of Assignment – submit the rough draft to the instructor and to a student partner (as had been done in the past); students were told that the rough draft would be worth up to 2 points of a 50-point assignment, but no specific feedback would be given on the rough draft.

    • Made Change – Spring 2010.

    • Assess Effectiveness of Change – compare the average rating for the Writing Objective for Fall 2009 (3.93) to the average rating for Spring 2010 (4.14); a small increase, but in the right direction (a short sketch of this across-semester comparison follows these examples).

    • Continue cycle in Fall 2010 – emphasize rough draft even more by increasing points assigned to rough draft and providing feedback on rough drafts.

  • Example: Using Measures Other than SALT Ratings for Assessment.

    • Assess Assignments Related to “Weigh Evidence” Objective – class exercises; challenge questions from text; test performance.

    • SALT item for this objective is too general and not likely to be a helpful measure for effectiveness of these specific assignments.

    • Goal of this assessment is to determine how much students perceive that each of these assignments is contributing to achieving the “Weigh Evidence” objective.

    • Add questions to SALT that are directly related to each assignment. Laurie Van Ark at the Frost Center can help you add your questions to SALT. For example, “Degree to which the class exercises helped me to weigh evidence” (A great deal; Quite a bit; Somewhat; A little bit; Not at all; No Response). These questions could also be administered in class or on Moodle.

    • Follow the cycle to determine if each assignment is contributing to the “Weigh Evidence” objective; make modifications in assignments that are less effective than you want them to be; assess the effectiveness of each change by comparing the average for each question across semesters.
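
As a complement to the examples above, the cycle of making a change and checking the next semester’s results can be tracked with a simple record of per-semester averages. The sketch below (Python) uses the Fall 2009 and Spring 2010 values from the writing-objective example; later semesters are placeholders you would fill in as the cycle continues.

    # Minimal sketch for tracking one SALT item's average across the
    # change-and-assess cycle. Fall 2009 and Spring 2010 come from the
    # writing-objective example above; add later semesters as you collect them.

    semester_averages = {
        "Fall 2009": 3.93,    # before the rough-draft modification
        "Spring 2010": 4.14,  # first semester with the modified assignment
        # "Fall 2010": ...,   # after the next refinement of the assignment
    }

    semesters = list(semester_averages)
    for earlier, later in zip(semesters, semesters[1:]):
        change = semester_averages[later] - semester_averages[earlier]
        print(f"{earlier} -> {later}: change of {change:+.2f}")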

Step 4: Overall Assessment of Value of the Course, Work Load, Overall Average Hours Worked, and Overall Teaching Effectiveness

  • Review the frequency distributions for these four dimensions.

  • Compare your average to the Hope Average to get a rough idea of students’ perceptions of these dimensions relative to other courses. [Please see the following section on why we should be cautious in interpreting comparisons to the Hope Average.]

  • Decide whether you want to make instructional changes based on students’ perceptions on any of these dimensions. If so, you can follow the cycle for making instructional changes and assessing their effectiveness.

  • The goal of these efforts is to communicate with students more effectively about these important dimensions of your course.


Be Cautious when Comparing Your Averages to the Hope Average

Comparisons to an overall average can provide useful information. Health professionals use this type of comparison to determine if an individual is overweight or if their cholesterol level is too high. Holland was identified as a “happier than average” community based on a comparison to a national average. The Hope Average also provides a comparative value for interpreting your averages for the items on SALT. An instructor whose averages on the SALT items are above the Hope Average may feel “happier” than the average instructor. You need to be cautious, however, when you make these comparisons, especially when your average for an item is below the Hope Average and your class size is small (< 30).

The Hope Average is based on literally thousands of students’ ratings, but most students respond similarly, so a large number of ratings fall near the Hope Average. The result of this lack of variability in students’ responses is that a small difference between your average and the Hope Average can translate into a large difference in percentile rank. Being at the 60th percentile rather than the 30th percentile sounds like a substantial difference, but those two percentiles may reflect only a relatively small difference between your average and the Hope Average. For example, the Hope Average for “Speak effectively” is 3.36, but an average of 2.95 falls at the 30th percentile and an average of 3.68 at the 60th percentile. Be attentive to the absolute size of the difference between your average and the Hope Average when you make that comparison.
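
To see the point concretely, the sketch below (Python) reports both numbers side by side for a single SALT item. Everything in it is hypothetical: the list of course averages and the value of my_average are invented for illustration, since the actual distribution of averages lives in the SALT data.

    # Minimal sketch: report both the absolute difference from the Hope Average
    # and the percentile rank for one SALT item. All values here are hypothetical.

    course_averages = [2.8, 3.0, 3.1, 3.2, 3.3, 3.35, 3.4, 3.45, 3.5, 3.6, 3.7, 3.9, 4.1]
    hope_average = sum(course_averages) / len(course_averages)

    def percentile_rank(value, values):
        """Percentage of course averages at or below the given value."""
        return 100 * sum(v <= value for v in values) / len(values)

    my_average = 3.45  # hypothetical average for your section
    print(f"Absolute difference from the Hope Average: {my_average - hope_average:+.2f}")
    print(f"Percentile rank: {percentile_rank(my_average, course_averages):.0f}")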

Comparing the averages for the SALT items to the Hope Average is also affected by the relatively small samples on which the averages are based. The enrollments in many of our classes are around 30 or fewer students. With such small samples the estimate of the average can be affected by the ratings of only a few students. The impact of an individual student’s rating differs depending on whether the student assigns a relatively low rating or a relatively high rating for an item.

The potential impact of individual ratings on the average for a SALT item can be illustrated with the results from a course that one of our colleagues taught in Spring 2010 with 18 students. The data are for the SALT item “Presented material in a clear and organized manner”; a short sketch reproducing these calculations follows the list.

  • The frequency distribution was: SA(14); A(3); Neutral(1); D(1); SD(0)

  • The average for this item was 4.58 [Hope Average = 4.08]

  • What happens when you drop lower ratings from the distribution?
     
    • When the rating for the one student with a Disagree response was dropped, the average changed from 4.58 to 4.72.

    • When the ratings for the one student with a Disagree response and the one with a Neutral response were dropped, the average changed from 4.58 to 4.82.

  • What happens when you drop higher ratings from the distribution?

    • When the rating for one of the students with a Strongly Agree response was dropped, the average changed from 4.58 to 4.56.

    • When the ratings for two of the students with Strongly Agree responses were dropped, the average changed from 4.58 to 4.53.
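
These recomputations can be reproduced directly from the frequency distribution. In the sketch below (Python), the 5-to-1 numeric coding of the Strongly Agree through Strongly Disagree responses is an assumption, but it matches the published averages.

    # Minimal sketch reproducing the example above: recompute the item average
    # after dropping individual ratings. Assumed coding: SA=5, A=4, Neutral=3,
    # D=2, SD=1.

    ratings = {5: 14, 4: 3, 3: 1, 2: 1, 1: 0}   # SA(14); A(3); Neutral(1); D(1); SD(0)

    def average(counts):
        total = sum(counts.values())
        return sum(score * n for score, n in counts.items()) / total

    print(f"All ratings:               {average(ratings):.2f}")              # 4.58

    drop_disagree = dict(ratings); drop_disagree[2] -= 1
    print(f"Drop the Disagree:         {average(drop_disagree):.2f}")        # 4.72

    drop_d_and_neutral = dict(drop_disagree); drop_d_and_neutral[3] -= 1
    print(f"Drop Disagree and Neutral: {average(drop_d_and_neutral):.2f}")   # 4.82

    drop_one_sa = dict(ratings); drop_one_sa[5] -= 1
    print(f"Drop one Strongly Agree:   {average(drop_one_sa):.2f}")          # 4.56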

Comparisons of your individual average to the Hope Average are most informative when the estimate of your average is stable. As the previous example illustrates, individual ratings at the lower end of the distribution can have a fairly large impact on the average for a SALT item, which means the average may be less stable than it appears. That instability is one good reason to be cautious when comparing your average to the Hope Average. Before interpreting any difference between your average and the Hope Average, look carefully at the frequency distribution for the SALT item. This is especially important with smaller enrollments (e.g., < 30).

The importance of looking carefully at the frequency distribution is illustrated by another example from two sections of the same course that one of our colleagues taught in Spring 2010. The data are for the SALT item “Understand as I read/listen/view.” This was a primary objective for the course (Hope Average = 3.96).

The frequency distributions for the two classes were:

Section 1: Great Deal (2), Quite a bit (7), Somewhat (10), Little bit (1), Not at all (0)
Section 2: Great Deal (4), Quite a bit (10), Somewhat (3), Little bit (1), Not at all (0)

    • The two frequency distributions are similar with ratings of “Quite a bit” or “Somewhat” for 17/20 students in Section 1 and 13/18 in Section 2.

    • The average rating for Section 1 was 3.5 (21st percentile) and the average rating for Section 2 was 4.0 (50th percentile).

    • The absolute size of the difference between the averages for the two sections is only .5.

    • The average of 3.5 for Section 1 corresponds to a rating between “Quite a bit” and “Somewhat” suggesting the objective is being met reasonably well in the course, but the 21st percentile might be taken to mean that there is a serious problem with meeting the objective in this course.

The take-away message from these two examples is that you need to look carefully at the frequency distribution before interpreting a comparison of your average to the Hope Average (or a percentile).


Guidelines Specific to Course Assessment

Comparison of Objectives for Skills and Habits of Learning

  • There is an issue to consider for the Course Assessment portion of SALT: students’ ratings for the Skills Objectives [mathematics, technology & library, writing, and speaking] differ somewhat from their ratings for the Habits Objectives [weigh evidence, understand as I read/listen/view, understand cultural development, creative and innovative, curious and open, intellectual courage, integrity/compassion/faith]. The table below shows the average rating for each objective in the primary, secondary, and not-an-objective categories for Spring 2010.

    Objective        Primary   Secondary   Not-an-Objective   Primary-Not
    Logic/Evidence    3.98       3.71            3.46              .52
    Mathematics       4.32       3.20            1.87             2.45*
    Understanding     3.96       3.80            3.73              .23
    Technology        4.07       3.34            2.67             1.40*
    Writing           4.16       3.59            2.48             1.68*
    Speaking          4.06       3.51            2.76             1.30*
    Cultural Dev      4.16       3.69            3.12             1.04
    Creativity        4.14       3.57            3.28              .86
    Curiosity         4.05       3.80            3.64              .41
    Courage           4.11       3.90            3.88              .23
    Integrity         4.22       3.93            3.70              .52

  • In general, students’ ratings should be higher for primary objectives than for the not-an-objective category, and all the objectives show the expected difference. The last column of the table shows the difference between the average for the primary category and the average for the not-an-objective category; the differences for the Skills objectives are marked with an asterisk. The differences for the Skills objectives are larger than all of the differences for the Habits objectives (the short sketch after this list summarizes the pattern). One reason for the larger differences is that the not-an-objective averages for the Skills objectives are much lower (between the ratings of “a little bit” and “somewhat”) than those for the Habits objectives (between the ratings of “somewhat” and “quite a bit”). It seems that students can tell more clearly when a Skills objective has not been covered in a course.

  • The data for both the Skills objectives and the Habits objectives do provide useful information for assessing students’ perceptions of our courses.

  • If you want to assess the effects of instructional changes you have made, however, it will be easier to do that with the Skills objectives. 
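
The pattern described above can be summarized numerically. The sketch below (Python) takes the Primary-Not column from the Spring 2010 table and groups it into Skills and Habits objectives, using the groupings listed at the start of this subsection.

    # Minimal sketch summarizing the Primary-Not gaps from the Spring 2010 table,
    # grouped into Skills and Habits objectives.

    primary_minus_not = {
        "Logic/Evidence": 0.52, "Mathematics": 2.45, "Understanding": 0.23,
        "Technology": 1.40, "Writing": 1.68, "Speaking": 1.30,
        "Cultural Dev": 1.04, "Creativity": 0.86, "Curiosity": 0.41,
        "Courage": 0.23, "Integrity": 0.52,
    }
    skills = {"Mathematics", "Technology", "Writing", "Speaking"}

    skills_gaps = [gap for name, gap in primary_minus_not.items() if name in skills]
    habits_gaps = [gap for name, gap in primary_minus_not.items() if name not in skills]

    print(f"Smallest Skills gap: {min(skills_gaps):.2f}  Largest Habits gap: {max(habits_gaps):.2f}")
    print(f"Mean Skills gap: {sum(skills_gaps) / len(skills_gaps):.2f}  "
          f"Mean Habits gap: {sum(habits_gaps) / len(habits_gaps):.2f}")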

Integrating SALT Objectives into Your Course and Integrating Your Course Objectives into SALT

  • The SALT objectives are based on the objectives of the General Education Curriculum. Instructors are likely to have objectives for their courses that go beyond the SALT objectives. There are at least two ways to integrate these two different types of objectives.

  • One way is to integrate the SALT objectives into your syllabus to indicate to students how the assignments in your course connect to the SALT objectives. The following table illustrates how one instructor integrated the SALT objectives into the course.

    Objective – Where practiced in Social Psychology
    Critical thinking – Evaluating studies & theories (TAs, lab paper, exams)
    Mathematical thinking – Explaining results of your lab, understanding results of other studies
    Critical reading with sensitivity – Lab design & conclusions, evaluating theories, Paper 1
    Written Communication – Exams, TAs, Paper 1, Lab Paper
    Oral Communication – Lab group, class, lab presentation
    Analytic & synthetic thinking – Seeing themes, developing hypotheses, TAs, Paper 1
    Creativity – Designing your lab, lab presentation, TA & test answers requiring new examples, Paper 1
    Curiosity and openness to new ideas – Designing your lab, Paper 1, evaluating theories, class discussions, IAT
    Intellectual courage and honesty – Talking in class, TA answers, Paper 1
    Moral & spiritual discernment & responsibility – Evaluating one's own values, biases (IAT, TAs)

  • You can also integrate your own course objectives with the SALT objectives by adding new questions assessing your own objectives to the SALT form.  Laurie Van Ark at the Frost Center can help you add your questions to SALT. These questions could also be administered in class or on Moodle.