“Nudging” (consisting of text and/or email messages sent to students about tasks that they should perform) and “early alert” systems (which send messages to students whose performance is inadequate or at risk of becoming inadequate) are gaining popularity in higher education. A blog entry by Matt Reed in Inside Higher Ed points out that, if the messages students receive are unpleasant, as negative nudges and early alerts can be, students will stop reading them, and asks how to counter that.
Here I will not address the technological or other practical aspects of how to deliver messages that get read. My purpose is only to review some of the findings from the field of behavioral science that help to inform what should be done to make these messages as effective as possible.
First, Reed is correct about the effect of aversive stimuli such as bad-news texts. We avoid the sources of negative communications (even when that source is a professor or boss, avoidance that can cause all kinds of problems). Therefore, in general, when possible, the messages that we send to students should aim at increasing what they are doing right, rather than decreasing what they are doing wrong—what is known as catching someone doing something well.
We also know from behavioral science that, in order to increase a behavior of a particular person over the long term, the feedback should be:
Targeted to the specific behavior that we want to increase (it should not just consist of an amorphous “good job”)
Delivered as close in time as possible to the occurrence of that behavior
Accompanied by the delivery of something (praise, points, food, etc.) that is of value to the particular person emitting the behavior, i.e., it should be accompanied by a reward (and, no, that will not cause problems by ruining the person’s internal reward system, whatever that is)
Not delivered after every instance of the particular behavior, but on an irregular schedule (if every instance is rewarded, the behavior will extinguish more quickly once rewards cease than if rewards have been delivered irregularly)
Some years ago, Hostos Community College of The City University of New York developed a points system that had many of these attributes. The Hostos administration reasoned that there were many behaviors that they wanted students to engage in for which there were no immediate rewards such as grades, behaviors such as filing the FAFSA or filling out a college survey. Hostos therefore awarded students points for these behaviors and then held lotteries in which students could win real prizes. The more points a student had, the more chances that student had to win. Although I never saw any data about the results, I was told that this reward system worked well.
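The mechanics of such a points-weighted lottery are simple. Here is a minimal sketch in Python, with entirely hypothetical students and point totals (I have no details of how Hostos actually ran its drawings):

```python
import random

# Hypothetical sketch of a points-weighted lottery like the one
# described above: each point a student has earned is one chance
# to win. Names and point values are invented for illustration.
points = {"Ana": 5, "Ben": 1, "Cara": 3}

def draw_winner(points, rng=None):
    """Draw one winner, with probability proportional to points held."""
    rng = rng or random.Random()
    students = list(points)
    weights = [points[s] for s in students]
    return rng.choices(students, weights=weights, k=1)[0]

winner = draw_winner(points, rng=random.Random(0))
```

Here Ana, with 5 of the 9 outstanding points, wins a bit more than half the time, which preserves the incentive to keep accumulating points while still giving every participant a chance.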
In conclusion, when nudging or using early alerts, we should try to send messages that are positive, not just negative, and our positive messages should be targeted to specific behaviors, be timely, be rewarding, and be irregularly delivered.
Traditional mathematics remediation has been described as the largest single academic block to students graduating in the United States. Most new college students are assessed as needing it, and the majority of those students never complete it—most of the students who take it do not pass, and many students avoid taking it at all. Without completing assigned mathematics remediation, students usually cannot take many of their required college-level courses, and so cannot graduate.
In 2008, when I became the chief academic officer of The City University of New York system, which includes 10 colleges that offer mathematics remediation (and 9 that do not), mathematics remediation was a big business. At that time, CUNY was spending over $20 million per year on remediation, the majority of it on math remediation. In recent years, I believe that figure has increased to over $30 million.
In 2008, CUNY was mostly delivering mathematics remediation as traditional courses. These were courses covering only remedial material that were taught in a sequence. Students could not take the next course in a sequence until they had passed the prerequisite course. CUNY colleges offered two, and sometimes three, levels of math remediation that a student had to pass or test out of before being allowed to take many college-level courses.
However, research reports on various methods for delivering math remediation were appearing. Some of those reports concluded that exposure to traditional remedial courses increased a student’s later college success; others concluded that new methods, such as placing students assessed as needing math remediation directly into a college-level course with extra support, were more helpful. Some of this research used what are known as quasi-experimental analytical techniques. But it was hard to know what to make of the research as a whole. As an experimental psychologist, I was appalled that we had so little evidence to guide these huge expenditures of funds on programs that affected the lives of many thousands of students.
Therefore, in 2013, along with Mari Watanabe-Rose and later Dan Douglas, I conducted a randomized controlled trial with over 900 students assessed as needing elementary (remedial) algebra. We randomly assigned these students to traditional remediation, that course plus a weekly workshop, or introductory, college-level, credit-bearing statistics with a weekly workshop. Twelve faculty each taught one section of each course type. The results showed significantly higher pass rates in the statistics course, and, 2.5 years later, 8 percentage points more of the statistics students had graduated than of the traditional remediation students, and fewer of the statistics students had dropped out of college. The students assigned to statistics demonstrated that they did not need to pass elementary algebra to pass statistics, nor to satisfy their college-level natural and social sciences requirements.
While we were following the ongoing performance of the participants in our experiment, the research literature on remediation, including research conducted with rigorous methodology, had become large enough that research reviews became possible. Two such recent reviews have both concluded that the weight of the evidence now shows that traditional remediation, in general, makes it more difficult for students to advance in their academic careers.
Thus, in contrast to the state of the research in 2008, we can now conclude, with a fair degree of certainty, that placing students into traditional remediation is, in most cases, not the best path for helping them to pass their college-level courses. Tying math remediation to college-level courses, so that the remedial work is delivered precisely when it is needed to understand the college-level material, and streamlining and aligning required quantitative content, both appear to be more effective methods for helping students progress through their college-level requirements.
In my blog post of January 20, 2017, I addressed the issue of whether or not it makes sense for someone to say that a course on subject A is easier or harder than a course on subject B. I concluded that it is not possible to make such a comparison for two qualitatively different courses. There is not a single easiness scale on which we can compare such courses. That blog piece particularly concerned the research reported in Logue, Watanabe-Rose, and Douglas (2016; http://journals.sagepub.com/doi/pdf/10.3102/0162373716649056), which compared the performance of students (all assessed as needing remedial math) who took elementary (remedial) algebra (what we called Group EA) with those who took college-level introductory statistics with a weekly 2-hr workshop (what we called Group Stat-WS).
Although we cannot compare the easiness of two such qualitatively different courses, we can determine if different groups of students receive similar grades taking the same course, an approach employed in one part of our 2016 experiment. We found that, exposed to the same grading criteria, the Stat-WS students did significantly less well than students who had taken the same course but who had been assessed as needing no elementary algebra. Nevertheless, the Stat-WS students did significantly better than the EA students, with the majority of the Stat-WS students passing statistics but the majority of the EA students failing elementary algebra. Students assessed as needing elementary algebra do not need to pass that course in order to pass introductory statistics with an accompanying weekly workshop. Further, our findings did not differ by instructor, by college, or by the race/ethnicity of the students, and there was approximately the same boost in pass rate from being assigned to Stat-WS versus EA for students with the full range of placement test scores.
We also know that it was not necessary for the Stat-WS students to pass elementary algebra in order to pass other college-level natural and social science courses that they took. Our 2016 publication showed that the Stat-WS students, during the year after the experiment was over, continued to earn more credits than the EA students, and the Stat-WS students were as likely to pass natural and social science general education courses as the EA students.
We now have 2.5-year follow-up data on the over 900 students who were randomly assigned in our experiment to EA, Stat-WS, or elementary algebra with a weekly workshop (a total of 36 course sections taught by 12 different faculty at 3 CUNY community colleges, with each faculty member teaching one section of each course type; these long-term follow-up data were reported at the 2017 AACC conference, see p. 38 of http://www.aacc.nche.edu/newsevents/Events/convention2/Documents/97thAnnualAACCConventionProgram.pdf). Of the 907 randomly assigned students (approximately 300 in each group), the data show that, so far, whereas 9% of the EA students have graduated, 17% of the Stat-WS students have graduated (a statistically significant difference), and whereas 52% of the EA students have dropped out of college (i.e., are not enrolled in any college in the United States), only 47% of the Stat-WS students are in that category. Further, once again, our findings do not differ according to the race or ethnicity of the students. Clearly, being assigned to statistics without first taking elementary algebra in no way harmed, and indeed benefitted, the Stat-WS students’ progress with regard to all of their college courses.
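As a rough sanity check, the reported graduation figures can be run through a standard two-proportion z-test. The counts below are approximations reconstructed from the published percentages (about 300 students per group, 9% vs. 17% graduating), not the actual student-level data:

```python
from math import sqrt, erf

# Approximate counts reconstructed from the reported percentages:
# roughly 300 students per group, 9% of EA vs. 17% of Stat-WS graduating.
n1 = n2 = 300
grads_ea = round(0.09 * n1)    # about 27 graduates
grads_stat = round(0.17 * n2)  # about 51 graduates

# Two-proportion z-test with a pooled proportion.
p1, p2 = grads_ea / n1, grads_stat / n2
pooled = (grads_ea + grads_stat) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

With these approximate counts, z comes out near 2.9 and the two-sided p-value well below 0.01, consistent with the statistically significant difference reported above.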
Yet before we use these results as grounds to assign large numbers of students to college-level statistics with extra support instead of traditional remedial math, perhaps we should consider programs for associate’s-degree students that may raise all students’ graduation rates, programs so successful that we need no longer be concerned about the low graduation rates of students assigned to remedial math. For example, we now know, from the most rigorous evidence available for any higher education program, that CUNY’s ASAP program (http://www1.cuny.edu/sites/asap/) doubles associate’s-degree graduation rates, even for students who are assessed as needing some remediation. Perhaps if we put all students into ASAP we will no longer need remedial math reform.
However, ASAP’s magnificent success does not mean that we have no work left to do on the problems caused by assigning students to traditional remedial courses. ASAP enrolls only students who have no, or slight, assessed remedial need, yet there is a 13% difference in the three-year graduation rates between those two groups of ASAP students, and only 48% of the students with slight assessed remedial need graduate within three years—plenty of room for improvement. Just because there are interventions such as ASAP that help address the nonacademic reasons for students dropping out does not mean that we should ignore the academic reasons, and remedial math is the largest single academic reason for students not completing college. The majority of new students in community colleges are assessed as needing it, and the majority of those students do not complete it.
Colleges may have an incentive to offer many sections of remedial math—such courses are among the most profitable courses because the instructors are among the lowest paid. However, alternative quantitative pathways, such as employed with Group Stat-WS in our experiment, can increase the retention and graduation rates of students and thus maintain or increase valuable enrollments while simultaneously decreasing the cost per graduate.
In the most recent previous entry on this blog (March 13), I described an articulation agreement between a community college and a university in New Jersey, an agreement that seems to be founded on decreasing the competition for transfer students and instead promoting their transfer to one particular bachelor’s-degree institution.
However, there are also attempts in process by others to increase competition for transfer students, which should be to the benefit of those students. An organization named Affordable College has developed an app, as well as other resources, for students seeking to transfer. Bachelor’s-degree colleges pay a fee to participate with Affordable College. Then Affordable College presents information about those colleges to students at community colleges who want to transfer. The community college students can use the information from Affordable College to find out how their credits would transfer to each of the participating bachelor’s-degree colleges. By participating, these colleges make sure that they’re in the mix for obtaining transfer students, and thus for increased enrollment and revenue. In this way, assuming community college students actually use Affordable College, there is an incentive for bachelor’s-degree colleges to pay the fee to participate and to state that they will transfer many credits. The community colleges, in turn, receive a “share of the revenue for each successful transfer,” thus providing them with an incentive to participate.
Such incentives work well in benefitting potential transfer students if those students have several possible destination colleges among which they can choose. But at my institution, The City University of New York, choices are in fact often much more limited. The 12 bachelor’s-degree colleges of CUNY do not all offer the same majors. So, for example, a student can only receive a Bachelor’s in Actuarial Science from Baruch College.
Another constraint on CUNY transfer students is geography. Although the 19 undergraduate colleges of CUNY are all located within the five boroughs of New York City, it can take up to two hours on public transportation to go from one part of the city, and from one CUNY college, to another. For CUNY students, who are more likely than not to be Pell Grant recipients, and who often must live with relatives and work in order to make ends meet, a very long commute to and from class, home, and/or work can be prohibitive.
Then, too, some of CUNY’s bachelor’s-degree colleges are very selective, including for transfer students. Just because a student applies to transfer to a particular CUNY college doesn’t mean that that student will be admitted.
Thus, the choice of a destination college for a CUNY transfer student can be quite limited.
Benefitting transfer students by fostering competition among the destination institutions will only be effective if transfer students truly have choices as to where they will go. Unfortunately for many students this is not the case. Therefore we need additional mechanisms to help such students, including ways to help them transfer as many of their credits as possible.
This past fall an article in Inside Higher Ed described what initially sounds like an articulation agreement that could be wonderful for students: “a new partnership, called 3+1, between [Rowan College at Burlington County, a community college, and Rowan University], which allows students to remain on the community college campus while earning a Rowan University degree. Participating students also get a 15 percent discount and are placed in guided degree pathways from the two-year institution that lead to a bachelor’s degree from the university.”
However, the article goes on to state that this “new program…has prompted the community college to limit any advertisements or promotion for other four-year colleges and universities on its campus. RCBC will not host transfer fairs or information tables for other four-year programs.”
A bachelor’s-degree college or university, such as Rowan, can see itself as giving up something in making one of these agreements because it has less opportunity to refuse to transfer credits when a student transfers in, and thus less opportunity to earn revenue from transfer students. However, such an agreement, particularly if it involves essentially eliminating the marketing of Rowan’s competitors, can give Rowan a leg up in obtaining transfer students as compared to other competing bachelor’s-degree institutions. Giving up some credits may be worth it if, as a result, you get more students.
But what happens to students who, for reasons such as geographical constraints or subject matter interest (e.g., Rowan does not have majors in Anthropology or Architecture), don’t want to transfer to Rowan after completing their associate’s degrees? It appears that the information that these students will have about other options will be limited, and they will have to do more to find their way in the transfer maze. Perhaps there are few RCBC graduates who would prefer a bachelor’s-degree institution other than Rowan. In that case, perhaps RCBC is doing the best thing that it can for the majority of its students in making this agreement, which could significantly help those RCBC graduates who want to attend Rowan to attain bachelor’s degrees.
Even so, it is unfortunate that what is good for each and every student is not the only criterion shaping these policies—that, due to existing incentive structures, self-protection and self-interest inevitably come to play in interactions between independently operating institutions, as they do in other areas of academe.
The evidence indicates that the more credits a college student has accumulated, the more likely that student is to graduate.
There are many reasons that this might be the case. One is that the more credits someone has, the shorter the delay to the reward of graduation, which increases the student’s motivation to do the remaining work needed to graduate.
Another is that the more credits someone has accumulated, and therefore the less time there is to graduation, the fewer the opportunities there are for something to occur in the student’s life that will interfere with graduation.
Still another possibility is that students who have accumulated many credits are more likely to have taken and passed more credits each semester than is the case for other students, and so are also more likely than other students to take and pass more credits per semester in the future. Such habits help students to complete their degrees.
Accumulating many credits has been described as constituting “academic momentum,” whereby having accumulated credits propels students to completion.
Whatever the reason, it is clear that accumulating more credits increases the probability that a student will complete his or her degree, and so a good bet for helping students to complete consists of helping them to accumulate additional credits, including at a higher rate.
This conclusion also means that, if you want to compare the relative graduation rates of different groups of students that are exposed to different interventions, you need to make sure that, at the start of the interventions, the groups are matched in terms of the numbers of credits that they have already accumulated.
For example, suppose you want to compare the relative graduation rates of students who start at a college as freshmen (what we can call native students) with students who transfer into that college as juniors. At entry to the college, the transfer students will likely already have accumulated one-fourth to one-half of the total credits that they need to graduate, but the freshmen will have started at the college with zero credits. To do an apples-to-apples comparison, the transfers need to be compared to students who started at the college as freshmen, but who have already accumulated, on average, the same number of credits as the transfers. When such a comparison is made, transfers are less likely to graduate than are native students, with a common reason being loss of credits on transfer.
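The matched comparison described above can be sketched in a few lines of code. All of the records below are made up; the point is only the mechanics of restricting the native-student comparison group to students in the same credit band as the transfers:

```python
# Illustrative sketch, with invented records, of an apples-to-apples
# comparison: transfers entering as juniors are compared with native
# students who had already accumulated similar credits, not with
# entering freshmen.
students = [
    {"type": "native",   "credits": 0,  "graduated": False},  # true freshman
    {"type": "native",   "credits": 61, "graduated": True},   # native junior
    {"type": "native",   "credits": 58, "graduated": True},   # native junior
    {"type": "transfer", "credits": 60, "graduated": False},
    {"type": "transfer", "credits": 62, "graduated": True},
]

def grad_rate(group):
    """Fraction of a group that graduated."""
    return sum(s["graduated"] for s in group) / len(group)

# Match natives to transfers on accumulated credits (here, a 45-75 band).
matched_natives = [s for s in students
                   if s["type"] == "native" and 45 <= s["credits"] <= 75]
transfers = [s for s in students if s["type"] == "transfer"]

native_rate = grad_rate(matched_natives)
transfer_rate = grad_rate(transfers)
```

In this toy data, as in the research described above, the transfers graduate at a lower rate than the credit-matched natives; including the zero-credit freshman in the native group would have muddied exactly that comparison.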
The fact that probability of graduation increases with the number of accumulated credits has implications both for how we help students graduate and for how we investigate what other factors affect their graduation.
Our research shows that when non-STEM majors assessed as needing math remediation (elementary algebra) are randomly assigned to college-level statistics (with extra support), they are more likely to pass, and to continue to accumulate more college-level credits afterwards, than similar students randomly assigned to traditional remedial elementary algebra. However, adoption of this alternative to math remediation has been slow at The City University of New York, where the research was conducted.
A few CUNY math faculty have questioned our results, saying that the higher pass rate in statistics was due to statistics being easier to learn than elementary algebra, or to statistics requiring only a 60% score to pass, whereas elementary algebra requires 74%.
Let us leave aside for the moment the fact that, in our experiment, students randomly assigned to statistics with extra support accumulated more credits in college afterwards than did students randomly assigned to elementary algebra, indicating that elementary algebra was not as necessary a prerequisite for students to satisfy their college-level course requirements as some have claimed.
For now, let us just consider the validity of the statement that statistics is easier to learn than elementary algebra, or the logic of comparing the percentage needed to pass statistics vs. elementary algebra.
First consider that, in grading any course, the percentage of correct answers that any student gets is completely arbitrary, because the percentage of correct answers is a function of the difficulty of the quizzes, exams, homework assignments, etc. A faculty member can make those tasks really hard, so that even good students get few questions correct, or really easy, so that most students get everything correct. With elementary algebra, there are CUNY-wide standards and tests, so you can be pretty sure that, if 20% of students consistently pass in one faculty member’s class and 60% in another, the latter faculty member is actually teaching better than the former, at least if the students in the two faculty members’ courses are similar at the beginning of the semester. But if there isn’t a CUNY-wide syllabus, final exam, and grading rubric (and, in fact, none of these exist for Statistics), you can’t tell which faculty member is teaching the material better without obtaining much more information.
Now consider the fact that statistics and elementary algebra are qualitatively different courses, which means that, by definition, they can’t ever have the same syllabus, final exam, and grading rubric. Which means that the percentage passing, or the percentage you define as passing, simply can’t be directly compared across algebra and statistics. Which means that there is no way to say that one is easier than the other.
Suppose, in any sample of 100 students who took both statistics and elementary algebra, 60 scored at least 80% correct in statistics and 40 scored at least 80% correct in elementary algebra. Does that mean that the statistics course is easier for the students than the elementary algebra course? Perhaps in the simple sense of the students getting better grades in statistics. But inherently easier? No. The faculty teaching statistics could simply make their exams much harder, and the number of students obtaining 80% correct in statistics would plummet, and then an observer might say that the statistics course is the harder one.
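The arbitrariness of a percent-correct threshold is easy to demonstrate with a toy simulation (all of the numbers here are invented): the same simulated students clear an 80% bar far less often when each question is simply made harder to answer.

```python
import random

# Toy illustration that "percent correct" depends on exam difficulty.
# Each simulated student answers each question correctly with some
# probability; lowering that probability models a harder exam.
rng = random.Random(42)
N_STUDENTS, N_QUESTIONS = 1000, 50

def share_above_80(p_correct):
    """Fraction of simulated students scoring at least 80% correct."""
    passes = 0
    for _ in range(N_STUDENTS):
        score = sum(rng.random() < p_correct for _ in range(N_QUESTIONS))
        passes += (score / N_QUESTIONS) >= 0.8
    return passes / N_STUDENTS

easy = share_above_80(0.85)  # easier exam: most students clear 80%
hard = share_above_80(0.70)  # same students, harder exam: few clear 80%
```

The pass rate at the 80% bar swings from a large majority on the easier exam to a small minority on the harder one, even though nothing about the students or the subject has changed, which is the point being made above.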
A more useful question for our research was whether the faculty in our experiment graded statistics according to the standards by which they usually graded statistics. In our published paper we list nine pieces of evidence consistent with the hypothesis that the faculty in our experiment graded statistics as it is usually graded. We can therefore reasonably conclude that many students, though assessed as needing remedial elementary algebra, can nevertheless pass college-level statistics, taught as it usually is except with some extra support (a weekly 2-hr workshop), as well as passing other subsequent college-level courses. Students are more likely to pass college-level statistics (taught as usual except with extra support) than remedial elementary algebra (taught as usual).
Of course, just because a student can pass college-level courses without having first passed elementary algebra doesn’t mean that no student should have to take elementary algebra, or higher-level algebra courses. A college or university could decide that it is important for every graduate of that college to demonstrate knowledge of at least elementary algebra, or that students majoring in certain disciplines need to do so. But such statements are different than saying that every college student needs elementary algebra in order to be able to pass required college-level courses, a statement that our research does not support.
At CUNY there are some faculty who believe that every CUNY graduate should demonstrate knowledge of algebra, at least elementary algebra. However, the CUNY-wide general education requirements do not currently require that every student know algebra. Passing college algebra is sufficient for passing the mathematical and quantitative reasoning general education requirement, but it is not necessary. Passing statistics can also satisfy this requirement, as can passing a quantitative reasoning course. And given that we now know that passing elementary algebra isn’t necessary in order to pass statistics (taught with extra support), it follows that there is no current requirement for all CUNY students to demonstrate knowledge of algebra.
Assuming that CUNY’s general education requirements do not change, all CUNY students who do not require algebra for their majors, and who have been assessed as needing math remediation, should have the opportunity to take statistics, or another quantitative alternative course, with extra support, instead of traditional elementary algebra.
Here is the YouTube video made by the American Educational Research Association (AERA) of me talking about the experiment concerning math remediation that we just published in the journal Educational Evaluation and Policy Analysis: