Answers to Frequently Asked Questions about SALT

Who will see my SALT results?

Your individual SALT results will be reported only to you. Data from SALT will be reported publicly within Hope College in relevant aggregates (for example, for introductory lab science courses or for the First Year Seminar course). These aggregate data will be used for departmental, program-specific, and college-wide conversations aimed at improving student learning at Hope College.

Will my students be asked to respond to SALT items that are not relevant to my specific course?

Before SALT is administered for each of your courses, you will be asked to designate each of the items listed in the SALT curricular goal questions as a primary objective, a secondary objective, or not an objective of that particular course. The answers to these questions will be used to generate comparative data for cohorts of courses that share the same primary and secondary objectives, so that you can learn from the results. See the AcAB minutes of September 16, 2008, item 4.B.1. Students will be told, as part of the instructions during the administration of SALT, that not all items are relevant to all courses and that irrelevant items should be marked N/A.

Why do faculty who are administering SIR IIs have to do SALT as well?

SALT is a course assessment that asks about Hope's specific stated skills and habits of learning. SIR II, a nationally normed instrument about general teaching behaviors, will continue to be used for teaching evaluation for those anticipating tenure and promotion decisions. We need faculty who are administering SIRs to administer SALT as well because we need good baseline course assessment data, and so we aim to gather data from all courses. Students who are completing both the SIRs and SALT will be instructed to skip the teaching assessment questions on SALT. SALT contains teaching assessment questions so that faculty who are not using the SIRs can receive information for self-improvement about student perceptions of their teaching. Faculty using the SIRs will receive data from their SIRs about teaching behaviors and will get their individual SALT results about their course's contributions to Hope's skills and habits of learning. Their SALT results will also become a valuable part of Hope College's aggregate data.

On some items my percentile score is below 50 even though my average is above the Hope College average. How can this be?

In some cases an individual's course average is above the Hope average, yet the percentile score is below 50. This is not an error, but a mathematical phenomenon that reflects the relationship between averages (mean scores) and percentiles (where 50 corresponds to the median, not the mean). The short answer to this question is: course averages are affected more strongly by extreme scores (high or low) than percentiles are. If the average differs substantially from the median (the 50th percentile), there must be a few students with relatively extreme ratings pulling the average up or down accordingly.

Here is a longer answer, with an example. The average, or mean, of a set of values is the sum of all of the values divided by the number of values. Many people incorrectly interpret this as a value that most of the values are relatively close to. When some of the values (commonly called outliers) differ greatly from the rest, the mean can be skewed by those outliers. For example, if we have five data values, 100, 90, 85, 80, and 0, then the mean is (100 + 90 + 85 + 80 + 0)/5, which is 71. In this case, people might be surprised to hear that four of the five values were 80 or above, but the mean was only 71.
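
To see this concretely, here is a minimal Python sketch using the hypothetical ratings from the example above, showing how a single outlier drags the mean below most of the values:

    # Hypothetical ratings from the example above: four high scores, one outlier.
    ratings = [100, 90, 85, 80, 0]

    # The mean is the sum of the values divided by the number of values.
    print(sum(ratings) / len(ratings))  # 71.0 -- below four of the five values

    # Drop the outlier and the mean lands where intuition expects it.
    print(sum(ratings[:4]) / 4)         # 88.75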

The median of a set of values is a value that separates the data values into halves: there will be an equal number of values above the median and below it. In the data set above, the median is 85, since there are two values above 85 and two below it. The median corresponds to a percentile of 50; a percentile of 40 means that a given value is above 40% of the data values and below 60% of them.

When computing the median, very low values aren't treated any differently than if they had been just a tiny bit below the median, since the median simply finds the point where 50% of the values are above and 50% are below.

In the SALT data, if there are outliers, they tend to be values much lower than the others (rarely do most students give a very low rating on a question while a few students rate it much higher). This means that if there are a reasonable number of outliers, we may see a mean score that is less than the median. Our example above illustrates this: consider the score of 80. That score is above the mean of 71 but below the median of 85. If you have further questions about this, ask your favorite mathematician.
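
For those who would rather verify than ask, here is a short Python sketch (standard library only) that computes the mean, the median, and the percentile rank of the score 80 from the same hypothetical ratings. The percentile_rank helper is our own illustration of the definition given above, not part of any SALT software:

    from statistics import mean, median

    ratings = [100, 90, 85, 80, 0]

    print(mean(ratings))    # 71 -- pulled down by the single outlier
    print(median(ratings))  # 85 -- insensitive to how far below it the outlier falls

    # Percentile rank as defined above: the percentage of values a score exceeds.
    def percentile_rank(value, values):
        below = sum(1 for v in values if v < value)
        return 100 * below / len(values)

    # The score 80 is above the mean (71) yet below the median (85).
    print(percentile_rank(80, ratings))  # 20.0 -- above only one of the five values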

If SALT is an assessment instrument, why does it ask students about their views regarding the quality of teaching? Doesn't that make SALT a hybrid of assessment and teaching evaluation? Shouldn't assessment and evaluation be done using separate instruments?

Assessment is data-gathering aimed at improvement, in this case of teaching and of learning. Evaluation is data-gathering aimed at performance review within a structure of supervision and such incentives as raises, promotion, and tenure. SALT asks about student perceptions of the quality of teaching in their courses because (1) teaching and learning are integral to one another and (2) teaching is more likely to improve continuously when faculty know how students perceive their teaching. To preserve the privacy of the information, individual SALT results will be sent only to the individual faculty member; aggregate information, which will mask individuals' results, will be made available more widely. Chairs and colleagues may be helpful mentors in improving teaching. If a faculty member were to decide to share his or her results with a department chair or other faculty colleague for mentoring purposes, SALT would remain an assessment tool.

How is SALT different from the old HCTA?

SALT is a student assessment of the student's perception of learning and of the teaching that contributed to it. Many SALT questions focus on the overarching objectives of Hope's liberal arts education. While the HCTA asked mainly about good teaching behaviors, SALT asks how the course contributed to growth in Hope's stated skills and habits of learning. The HCTA was made available to students at the instructor's discretion; SALT will be available to all students in all courses, unless the instructor's dean agrees that a particular case should be an exception. For the full text of SALT and a record of Academic Affairs Board actions, see the AcAB minutes of October 21, 2008.