Administrators Should Be Responsible for Information

In my forthcoming book, Pathways to Reform: Credits and Conflict at The City University of New York, as well as in my recent piece in The Evolllution, I write about the many reasons that, despite good evidence for change in higher education, change does not occur.  My new book discusses this topic within the context of facilitating credit transfer between colleges.  The Evolllution piece uses the example of math remediation reform.  Although there is rigorous evidence supporting math remediation reform, the resulting changes, like reforms designed to facilitate credit transfer, have been slow in coming.  Why?

As described in both my new book and the Evolllution piece, the usual explanation given is faculty. I give some of the many reasons that the faculty may be slow to adopt change.  But the faculty aren’t the only reason that change doesn’t get made at colleges and universities.  Again in both publications, I discuss the many reasons that administrators also may not support change.  Further, I make the point that, if there is clear evidence for change, and the faculty will not initiate that change, then it is up to the administrators to effect that change.

But before things reach that point, it is first the responsibility of administrators to make sure that everyone, including the faculty, who would be involved in a change, knows about and understands the evidence supporting the change.  If administrators do not themselves have the expertise or the time to understand and explain the relevant evidence, then the administrators should find one or more people who do.

Throughout my academic career I have been repeatedly appalled at the inadequate transmission of information from the administration to the faculty.  This began when I was a new full professor (and an associate dean) and heard my department chair—incorrectly—tell the department’s faculty that they were teaching more students per faculty member than were any of the other departments.  The department’s faculty were then, understandably, loath to make the dean’s requested changes in the department’s teaching output.

Many years later, when I worked in the CUNY system central office, I learned that one of the 24 CUNY college presidents was allowing only his deans and vice presidents to have access to their college’s performance data (compiled by the CUNY central office), encouraging the faculty to think that their college’s retention and graduation rates were higher than they actually were and discouraging those faculty from believing that any change was necessary.  I subsequently instituted a policy making all of these data publicly available.  Then at least when we discussed what did and did not need changing, everyone was basing their arguments on the same evidence.

A few months ago, I attended a conference whose attendees were mostly administrators.  In one session about remediation, a speaker first asked everyone in the small audience to say where they were from.  Two people sitting together said that they were both deans in the Florida public college system.  The speaker then said, how interesting, I've been wondering about the results of the recent remediation changes in Florida (2013 legislation gave students assessed as needing remediation the choice of whether or not to take traditional remedial courses).  The two deans immediately said, oh, we don't know, it's too soon to have any results yet.  However, despite being in New York, I knew that, in the two years prior to the conference, several publications had reported results of the change implemented by the Florida system.  So I raised my hand and told everyone some of those results (the percentage of students passing certain college-level gateway courses went down, but, because more students were now able to take those courses, more students in total passed them).
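
To see how a lower pass rate can coexist with more total students passing, here is a small arithmetic illustration (the numbers are invented for the sake of the arithmetic only; they are not Florida's actual figures):

    # Hypothetical numbers for illustration only -- not actual Florida data.
    before_enrolled, before_pass_rate = 1000, 0.70   # gateway enrollment when remediation was required first
    after_enrolled, after_pass_rate = 1600, 0.55     # more students reach the gateway course; the pass rate drops

    print(round(before_enrolled * before_pass_rate))  # 700 students pass
    print(round(after_enrolled * after_pass_rate))    # 880 students pass -- more passers despite the lower rate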

I don't know why the two Florida deans, who presumably were interested in or involved with remediation or they would not have attended the session in the first place, did not know about these publications.  But what I do know is that if deans working in this area don't have critical information, then it is likely that their faculty won't have it either, and the deans can't be helpful in transmitting that information.  In the meantime, those deans' faculty may be struggling with lowered pass rates, worrying that those lower rates will hurt their reputations as teachers, and miserable at having to fail more students.  The encouragement these faculty might have received from knowing that, nevertheless, more students are being enabled to stay in and graduate from college is not available to them.

For all of this I hold administrators responsible.  Good choices depend on having good information, and administrators have both the opportunity and the resources, and thus the responsibility, to obtain and distribute good information to all appropriate personnel.  Only with that information will faculty, administrators, and other staff be able to fulfill their obligations to their institutions and to their students.

Resources on Undergraduate Student Transfer

In connection with my new book, Pathways to Reform:  Credits and Conflict at The City University of New York, I have added many resources to my website concerning transfer.  Suggestions regarding changes in any of these materials are most welcome. My hope is that these resources will provide assistance to all those interested in improving student transfer.

Clicking on my website’s home page top tab titled “New Book…” reveals a drop-down menu listing all of the different resources available.  There you can find:

  • General information about the book
  • The book’s table of contents
  • Supplemental (key documents) and digital (photos, videos, and an audio recording) content associated with specific book chapters
  • Information on how to purchase the book
  • Comments and reviews about the book
  • A list of non-CUNY websites, organizations, key data reports, and best practices concerning student transfer
  • A bibliography of articles and books that discuss college student transfer
  • A list of publications and broadcasts that refer to transfer at CUNY and, more specifically, CUNY’s Pathways initiative (a set of policies designed to smooth student transfer from one CUNY college to another)
  • Examples of uses of the words Path and Pathways, including:
    • Plays on the words Path and Pathway made during CUNY's Pathways Project (2011-2013) by CUNY supporters of Pathways, by the PSC (the CUNY Faculty Union), by the UFS (CUNY's University Faculty Senate), and by outside media
    • Examples of higher education’s use of the words Path and Pathways other than CUNY’s Pathways initiative
    • A few of the many aphorisms and quotations using the words Path or Pathway

Not All Excess Credits Are the Students’ Fault

A recent article in Educational Evaluation and Policy Analysis reported on an investigation of policies that punish students for graduating with excess credits.  Excess credit hours are the credits that a student obtains in excess of what is required for a degree, and many students graduate having taken several more courses than they needed.

To the extent that tuition does not cover the cost of instruction, and/or that financial aid is paying for these excess credits, someone other than the student—the college or the government—is footing the bill for them.  Graduating with excess credits also means that a student is occupying possibly scarce classroom seats longer than s/he needs to, and is not entering the workforce with a degree, and paying more taxes, as soon as s/he could.  Thus there are many reasons why colleges and/or governments might seek to decrease excess credits.  The article considers cases in which states have imposed sanctions on students who graduate with excess credits, charging more for credits taken significantly above the number required for a degree.  The article shows that such policies, instead of resulting in students graduating sooner, have resulted in greater student debt.  But the article does not identify the reasons why this may be the case.  Perhaps one reason is that students do not have control over those excess credits.
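
Concretely, an excess-credit surcharge of the kind the article examines might work something like the following sketch (the credit requirement, threshold, and rates here are hypothetical, not any particular state's policy):

    # Hypothetical excess-credit surcharge, for illustration only.
    required_credits = 120        # credits required for the degree
    surcharge_threshold = 1.20    # surcharge starts above 120% of the requirement, i.e., above 144 credits
    base_rate = 250.0             # assumed per-credit tuition
    surcharge_multiplier = 2.0    # excess credits billed at twice the base rate

    def tuition_for(total_credits):
        threshold = required_credits * surcharge_threshold
        excess = max(0, total_credits - threshold)
        return (total_credits - excess) * base_rate + excess * base_rate * surcharge_multiplier

    print(tuition_for(130))   # below the threshold, so no surcharge: 32500.0
    print(tuition_for(150))   # 6 credits above the threshold are billed at the higher rate: 39000.0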

One example of this lack of control, as described in my forthcoming book, Pathways to Reform: Credits and Conflict at The City University of New York, is that students may accumulate excess credits because of difficulties they have transferring their credits.  When students transfer, there can be significant delays in having the credits that they obtained at their old institution evaluated by their new institution.  At least at CUNY colleges, the evaluation process can take many months.  During that period, a student either has to stop out of college or take a risk and enroll in courses that may or may not be needed for the student's degree.  Even when appropriate courses are taken, all too often credits that a student took at the old college as satisfying general education (core) requirements or major requirements become elective credits, or do not transfer at all.  A student then has to repeat courses or take extra courses in order to satisfy all of the requirements at the new college.  Given that a huge proportion of students now transfer, or try to transfer, their credits (49% of bachelor's-degree recipients have some credits from a community college, and over one-third of students in the US transfer within six years of starting college), a great number of credits are being lost.

Nevertheless, a 2010 study at CUNY found that only a small proportion of the excess credits of its bachelor's-degree recipients was due to transfer—students who never transferred graduated with only one or two fewer excess credits, on average, than did students who did transfer.  Some transfer students may have taken fewer electives at their new colleges in order to make room in their programs to make up non-transferring credits from their old colleges, thereby avoiding many excess credits.

But does this mean that we should blame students for those excess credits and make them pay more for them?  Certainly some excess credits are due to students changing their majors late, and/or to students not paying attention to requirements and so taking courses that don't count toward their degrees, and there may even be some students who would rather keep taking courses than graduate.

But there are still other reasons that students may accumulate extra credits, reasons for which the locus of control is not the student.  Especially in financially strapped institutions, students may have been given bad advice, or no advice, by an advisor.  In addition, students may have been required to take traditional remedial courses, which can result in a student acquiring many of what CUNY calls equated credits on top of the required college-level credits (despite the fact that there are more effective ways to deliver remediation without the extra credits).  Or, in order to remain enrolled full-time and thus stay eligible for some types of financial aid and/or (in the past) health insurance, a student may have taken extra courses that s/he did not need in order to graduate.  Students may also have made course-choice errors early in their college careers, when they were unaware of excess-credit tuition policies that would only have an effect years later.

The fact that the imposition of excess-credit tuition policies did not affect the number of excess credits accumulated, but instead increased student debt, by itself suggests that, to at least some degree, the excess credits are not something that students can easily avoid, and/or that there are counter-incentives operating that are even stronger than the excess tuition.

Before punishing students, or trying to control their behavior, we need to have a good deal of information about all of the different contingencies to which students are subject.  Students should complete their college’s requirements as efficiently as possible.  However, just because some students demonstrate delayed graduation behavior does not mean that they are the ones who are controlling that behavior.  Decreasing excess credits needs to be a more nuanced process, with contingencies and consequences tailored appropriately to those students who are abusing the system, and those who are not.

Are Faculty Missing in Action?

Last fall, an article in Inside Higher Ed authored by Judith Shapiro, President of the Teagle Foundation and former President of Barnard College, made the following statement:

“For the most part, however, faculty members have simply been missing in action when it comes to dealing with campus upheavals around race and racism.”

I agree with this statement, but I would expand it to say that faculty members have frequently been missing in action with regard to all kinds of controversial issues.  At many (most?) institutions, faculty are rewarded with promotions, raises, and tenure first for their research (largely based on their individual efforts), second for their teaching (again, largely based on their individual efforts), and only third for their service, which would include working together with others to make their colleges congenial and productive places for the colleges’ diverse inhabitants.  The faculty who produce the most work of direct benefit to themselves are largely those faculty who keep to themselves, focus on their own work, and stay out of the way of college conflagrations.  Consistent with this statement, research has shown that faculty do not feel safe expressing views with which others may disagree until they have had the final promotion to full professor (not, as some people think, until they have tenure).

An example of these tendencies concerns credit transfer among the 19 undergraduate colleges of The City University of New York, where approximately 10,000 students transfer each fall alone.  Credit transfer is a controversial subject, one reason being that whether or not the receiving college counts the credits can directly affect the college's, as well as a department's, funds, and can determine whether faculty members have sufficient enrollment to teach certain courses.  Although ensuring that credits transfer can benefit students, it can also mean depriving faculty and/or a college of something desirable to them.  Thus it is no surprise that, although for over 40 years problems with credit transfer were seen as the worst problems for CUNY students, and although the faculty issued some statements about those problems, the faculty took no actions to solve them.  When the central administration finally instituted a system (known as Pathways) that guaranteed credit transfer for some courses, and thus directly affected some faculty members' courses, only then did some faculty spend significant amounts of time on the credit transfer issue, with most of those faculty objecting to Pathways, including by filing lawsuits against it.  This prompted one CUNY Distinguished Professor, in his testimony at a public hearing on Pathways, to say to the faculty in the audience: "Where have you been?  Where have you been for 40 years?"

Although there is nothing wrong with working hard to benefit oneself, we also need to provide clear incentives for faculty to work together for the benefit of students, as well as for the rest of the higher education community.

There is more about these issues in my forthcoming book:  Pathways to Reform:  Credits and Conflict at The City University of New York, to be published early in the fall by Princeton University Press (https://www.amazon.com/Pathways-Reform-Conflict-University-Education/dp/0691169942/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1494093848&sr=1-1).

Rewarding Students

"Nudging" (consisting of text and/or email messages sent to students about tasks that they should perform) and "early alert" systems (which include messages to students whose performance is inadequate, or at risk of becoming inadequate) are gaining popularity in higher education.  A blog entry by Matt Reed in Inside Higher Ed points out that, if students receive unpleasant messages, such as negative nudges and early alerts, they will stop reading them, and asks how to counter that tendency.

Here I will not address the technological or other practical aspects of how to deliver messages that get read.  My purpose is only to review some of the findings from the field of behavioral science that help to inform what should be done to make these messages as effective as possible.

First, Reed is correct about the effect of aversive stimuli such as bad-news texts.  We avoid the sources of negative communications (including when the source is a professor or boss, avoidance that can cause all kinds of problems).  Therefore, in general, when possible, the messages that we give to students should be about increasing what they are doing that is right, rather than decreasing what they are doing that is wrong—what is known as catching someone doing something well.

We also know from behavioral science that, in order to increase a particular person's behavior over the long term, the feedback should be (a sketch combining these principles follows the list):

  • Targeted to the specific behavior that we want to increase (it should not just consist of an amorphous “good job”)
  • Delivered as close in time as possible to the occurrence of that behavior
  • Accompanied by the delivery of something (praise, points, food, etc.) that is of value to the particular person emitting the behavior, i.e., it should be accompanied by a reward (and, no, that will not cause problems by ruining the person’s internal reward system, whatever that is)
  • Not delivered after every instance of the particular behavior, but on an irregular pattern (if every instance is rewarded, then if reward ceases, the behavior will dissipate more quickly than if reward delivery has been irregular)
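
Here is a minimal sketch of how these principles might be combined in an automated messaging system; the message wording, the points, and the 40% delivery probability are assumptions for illustration, not a tested design:

    import random

    REWARD_PROBABILITY = 0.4   # irregular (intermittent) delivery, not after every instance
    POINTS_PER_BEHAVIOR = 10   # something assumed to be of value to the student

    def on_behavior_observed(student_name, behavior):
        """Call as soon as the desired behavior is detected, so the feedback is timely."""
        if random.random() < REWARD_PROBABILITY:            # intermittent schedule
            # Targeted to the specific behavior and paired with a reward.
            return (f"{student_name}, you completed your {behavior} today -- nice work! "
                    f"You've earned {POINTS_PER_BEHAVIOR} points.")
        return None   # no message this time; irregular delivery sustains the behavior longer

    print(on_behavior_observed("Jamie", "FAFSA"))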

Some years ago, Hostos Community College of The City University of New York developed a points system that had many of these attributes.  The Hostos administration reasoned that there were many behaviors that they wanted students to engage in for which there were no immediate rewards such as grades, behaviors such as filing the FAFSA or filling out a college survey.  Hostos therefore awarded students points for these behaviors and then held lotteries in which students could win real prizes.  The more points a student had, the more chances that student had to win.  Although I never saw any data about the results, I was told that this reward system worked well.
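
A points-weighted lottery of the kind described can be sketched in a few lines; the student names and point totals below are invented, since I do not know the details of Hostos's actual implementation:

    import random

    # More points = more chances to win, as in the Hostos system described above.
    points = {"student_a": 30, "student_b": 10, "student_c": 60}

    # random.choices draws with probability proportional to the supplied weights.
    winner = random.choices(list(points), weights=list(points.values()), k=1)[0]
    print("Prize winner:", winner)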

In conclusion, when nudging or using early alerts, we should try to send messages that are positive, not just negative, and our positive messages should be targeted to specific behaviors, be timely, be rewarding, and be irregularly delivered.

The Evolution of the Research on Mathematics Remediation Reform at CUNY and Elsewhere

Traditional mathematics remediation has been described as the largest single academic barrier to graduation in the United States.  Most new college students are assessed as needing it, and the majority of those students never complete it—most of the students who take it do not pass, and many students avoid taking it at all.  Without completing assigned mathematics remediation, students usually cannot take many of their required college-level courses, and so cannot graduate.

In 2008, when I became the chief academic officer of The City University of New York system, which includes 10 colleges that offer mathematics remediation (and 9 that do not), mathematics remediation was a big business.  At that time, CUNY was spending over $20 million per year on remediation, most of it on math remediation.  In recent years, I believe that figure has increased to over $30 million.

In 2008, CUNY was delivering mathematics remediation mostly as traditional courses: courses covering only remedial material and taught in a sequence.  Students could not take the next course in a sequence until they had passed the prerequisite course.  CUNY colleges offered two, and sometimes three, levels of math remediation that a student had to pass, or test out of, before being allowed to take many college-level courses.

However, research reports on various methods of delivering math remediation were appearing.  Some of those reports concluded that exposure to traditional remedial courses increased a student's later college success, while others concluded that newer methods, such as placing students assessed as needing math remediation directly into a college-level course with extra support, were more helpful.  Some of this research used what are known as quasi-experimental analytical techniques.  But it was hard to know what to make of the research as a whole.  As an experimental psychologist, I was appalled that there was so little evidence to guide these huge expenditures of funds on programs that affected the lives of many thousands of students.

Therefore, in 2013, along with Mari Watanabe-Rose and later Dan Douglas, I conducted a randomized controlled trial with over 900 students assessed as needing elementary (remedial) algebra.  We randomly assigned these students to traditional remediation, to that course plus a weekly workshop, or to introductory, college-level, credit-bearing statistics with a weekly workshop.  Twelve faculty members each taught one section of each course type.  Pass rates were significantly higher in the statistics course; 2.5 years later, 8% more of the statistics students than of the traditional remediation students have graduated, and fewer of the statistics students have dropped out of college.  The students assigned to statistics demonstrated that they did not need to pass elementary algebra in order to pass statistics, nor did they need to pass elementary algebra in order to satisfy their college-level natural and social science requirements.
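
For readers who want a concrete picture of the random-assignment step, here is a minimal sketch; the actual study's procedure (consent, stratification by college, section capacities) was certainly more involved than this:

    import random

    # The three study arms described above.
    ARMS = ["elementary_algebra", "algebra_plus_workshop", "statistics_plus_workshop"]

    def randomize(student_ids, seed=0):
        rng = random.Random(seed)          # fixed seed so the assignment is reproducible
        shuffled = list(student_ids)
        rng.shuffle(shuffled)
        # Deal students out round-robin so the three groups end up roughly equal in size.
        return {arm: shuffled[i::3] for i, arm in enumerate(ARMS)}

    groups = randomize([f"student_{n}" for n in range(900)])   # over 900 students in the real study
    print({arm: len(ids) for arm, ids in groups.items()})      # 300 per group here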

While we were following the ongoing performance of the participants in our experiment, the research literature on remediation, including studies conducted with rigorous methodology, grew large enough that research reviews became possible.  Two recent reviews have both concluded that the weight of the evidence now shows that traditional remediation, in general, makes it more difficult for students to advance in their academic careers.

Thus, in contrast to the state of the research in 2008, we can now conclude, with a fair degree of certainty, that placing students into traditional remediation is, in most cases, not the best way to help them pass their college-level courses.  Tying math remediation to college-level courses when the remedial work is specifically needed to understand the college-level material, and streamlining and aligning required quantitative content, both appear to be more effective methods for helping students progress through their college-level requirements.

What Else Can the Logue, Watanabe-Rose, & Douglas (2016) Experiment Tell Us? The Original Experiment and Some Follow-Up Data

In my blog post of January 20, 2017, I addressed the issue of whether or not it makes sense for someone to say that a course on subject A is easier or harder than a course on subject B.  I concluded that it is not possible to make such a comparison for two qualitatively different courses.  There is not a single easiness scale on which we can compare such courses.  That blog piece particularly concerned the research reported in Logue, Watanabe-Rose, and Douglas (2016; http://journals.sagepub.com/doi/pdf/10.3102/0162373716649056), which compared the performance of students (all assessed as needing remedial math) who took elementary (remedial) algebra (what we called Group EA) with those who took college-level introductory statistics with a weekly 2-hr workshop (what we called Group Stat-WS).

Although we cannot compare the easiness of two such qualitatively different courses, we can determine whether different groups of students receive similar grades when taking the same course, an approach employed in one part of our 2016 experiment.  We found that, exposed to the same grading criteria, the Stat-WS students did significantly less well than students who had taken the same course but who had been assessed as not needing elementary algebra.  Nevertheless, the Stat-WS students did significantly better than the EA students, with the majority of the Stat-WS students passing statistics but the majority of the EA students failing elementary algebra.  In other words, students assessed as needing elementary algebra do not need to pass that course in order to pass introductory statistics with an accompanying weekly workshop.  Further, our findings did not differ by instructor, by college, or by the race/ethnicity of the students, and there was approximately the same boost in pass rate from being assigned to Stat-WS rather than EA for students across the full range of placement test scores.

We also know that it was not necessary for the Stat-WS students to pass elementary algebra in order to pass other college-level natural and social science courses that they took.  Our 2016 publication showed that the Stat-WS students, during the year after the experiment was over, continued to earn more credits than the EA students, and the Stat-WS students were as likely to pass natural and social science general education courses as the EA students.

We now have 2.5-year follow-up data on the over 900 students who were randomly assigned in our experiment to EA, Stat-WS, or elementary algebra with a weekly workshop (a total of 36 course sections taught by 12 different faculty members at 3 CUNY community colleges, with each faculty member teaching one section of each course type; these long-term follow-up data were reported at the 2017 AACC conference, see p. 38 of http://www.aacc.nche.edu/newsevents/Events/convention2/Documents/97thAnnualAACCConventionProgram.pdf ).  Of the 907 randomly assigned students (approximately 300 in each group), the data show that, so far, 17% of the Stat-WS students have graduated, compared with 9% of the EA students (a statistically significant difference), and whereas 52% of the EA students have dropped out of college (i.e., are not enrolled in any college in the United States), only 47% of the Stat-WS students are in that category.  Further, once again, our findings do not differ according to the race or ethnicity of the students.  Clearly, having been assigned to statistics without first taking elementary algebra in no way harmed, and indeed benefited, the Stat-WS students' progress through all of their college courses.
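
As a rough check on why the 9% versus 17% graduation difference is statistically significant with roughly 300 students per group, here is a simple two-proportion z-test sketch; the group sizes are approximated, and the published analyses may well have used different methods:

    from math import sqrt

    n_ea, n_stat = 300, 300            # approximately 300 students per group
    p_ea, p_stat = 0.09, 0.17          # graduation rates reported so far

    pooled = (p_ea * n_ea + p_stat * n_stat) / (n_ea + n_stat)
    se = sqrt(pooled * (1 - pooled) * (1 / n_ea + 1 / n_stat))
    z = (p_stat - p_ea) / se
    print(round(z, 2))                 # about 2.9, well beyond the 1.96 cutoff for p < .05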

Yet before we use these results as a basis for assigning many students to college-level statistics with extra support instead of traditional remedial math, perhaps we should consider programs for associate's-degree students that may raise all students' graduation rates, programs so successful that we need no longer be concerned about the low graduation rates of students assigned to remedial math.  For example, we now know, from the most rigorous evidence available for any higher education program, that CUNY's ASAP program (http://www1.cuny.edu/sites/asap/ ) doubles associate's-degree graduation rates, even for students who are assessed as needing some remediation.  Perhaps if we put all students into ASAP we will no longer need remedial math reform.

However, ASAP's magnificent success does not mean that we have no work left to do on the problems caused by assigning students to traditional remedial courses.  ASAP enrolls only students who have no, or slight, assessed remedial need, yet there is a 13% difference in the three-year graduation rates of those two groups of ASAP students, and only 48% of the students with slight assessed remedial need graduate within three years—plenty of room for improvement.  Just because there are interventions such as ASAP that help address the nonacademic reasons for students dropping out does not mean that we should ignore the academic reasons, and remedial math is the largest single academic reason for students not completing college.  The majority of new students in community colleges are assessed as needing it, and the majority of those students do not complete it.

Colleges may have an incentive to offer many sections of remedial math—such courses are among the most profitable because their instructors are among the lowest paid.  However, alternative quantitative pathways, such as the one used with Group Stat-WS in our experiment, can increase students' retention and graduation rates, and thus maintain or increase valuable enrollments while simultaneously decreasing the cost per graduate.

Does Increasing Competition for Transfer Students Always Help Them?

In my previous entry on this blog (March 13), I described an articulation agreement between a community college and a university in New Jersey, an agreement that seems to be founded on decreasing the competition for transfer students and instead channeling them to one particular bachelor's-degree institution.

However, there are also attempts under way by others to increase competition for transfer students, which should work to those students' benefit.  An organization named Affordable College has developed an app, as well as other resources, for students seeking to transfer.  Bachelor's-degree colleges pay a fee to participate with Affordable College.  Affordable College then presents information about those colleges to community college students who want to transfer.  The community college students can use the information from Affordable College to find out how their credits would transfer to each of the participating bachelor's-degree colleges.  By participating, these colleges make sure that they are in the mix for obtaining transfer students, and thus for increased enrollment and revenue.  In this way, assuming community college students actually use Affordable College, there is an incentive for bachelor's-degree colleges to pay the fee to participate and to state that they will accept many transfer credits.  The community colleges, in turn, receive a "share of the revenue for each successful transfer," thus providing them with an incentive to participate as well.

Such incentives benefit potential transfer students if those students have several possible destination colleges among which they can choose.  But at my institution, The City University of New York, choices are in fact often much more limited.  The 12 bachelor's-degree colleges of CUNY do not all offer the same majors.  So, for example, a student can receive a Bachelor's in Actuarial Science only from Baruch College.

Another constraint on CUNY transfer students is geography.  Although the 19 undergraduate colleges of CUNY are all located within the five boroughs of New York City, it can take up to two hours on public transportation to go from one part of the city, and from one CUNY college, to another.  For CUNY students, who are more likely than not to be Pell Grant recipients, and who often must live with relatives and work in order to make ends meet, a very long commute to and from class, home, and/or work can be prohibitive.

Then, too, some of CUNY’s bachelor’s-degree colleges are very selective, including for transfer students. Just because a student applies to transfer to a particular CUNY college doesn’t mean that that student will be admitted.

Thus, the choice of a destination college for a CUNY transfer student can be quite limited.

Benefitting transfer students by fostering competition among destination institutions will be effective only if transfer students truly have choices as to where they will go.  Unfortunately, for many students this is not the case.  Therefore we need additional mechanisms to help such students, including ways to help them transfer as many of their credits as possible.

Not All Articulation Agreements Are Good for Students

This past fall an article in Inside Higher Ed described what initially sounds like an articulation agreement that could be wonderful for students: “a new partnership, called 3+1, between [Rowan College at Burlington County, a community college, and Rowan University], which allows students to remain on the community college campus while earning a Rowan University degree.  Participating students also get a 15 percent discount and are placed in guided degree pathways from the two-year institution that lead to a bachelor’s degree from the university.”

However, the article goes on to state that this “new program…has prompted the community college to limit any advertisements or promotion for other four-year colleges and universities on its campus.  RCBC will not host transfer fairs or information tables for other four-year programs.”

A bachelor's-degree college or university, such as Rowan, can see itself as giving up something in making one of these agreements, because it has less opportunity to refuse to accept transfer credits (and thus to require transferring students to retake, and pay for, courses), and so less opportunity to earn revenue from transfer students.  However, such an agreement, particularly if it essentially eliminates the marketing of Rowan's competitors on the community college's campus, can give Rowan a leg up over competing bachelor's-degree institutions in obtaining transfer students.  Giving up some credit revenue may be worth it if, as a result, the university gets more students.

But what happens to students who, for reasons such as geographical constraints or subject matter interest (e.g., Rowan does not have majors in Anthropology or Architecture), don’t want to transfer to Rowan after completing their associate’s degrees?  It appears that the information that these students will have about other options will be limited, and they will have to do more to find their way in the transfer maze.  Perhaps there are few RCBC graduates who would prefer a bachelor’s-degree institution other than Rowan.  In that case, perhaps RCBC is doing the best thing that it can for the majority of its students in making this agreement, which could significantly help those RCBC graduates who want to attend Rowan to attain bachelor’s degrees.

Even so, it is unfortunate that what is good for each and every student is not the only criterion shaping these policies—that, due to existing incentive structures, self-protection and self-interest inevitably come into play in interactions between independently operating institutions, as they do in other areas of academe.

Number of Credits Accumulated and the Probability of Graduation

The evidence indicates that the more credits a college student has accumulated, the more likely that student is to graduate.

There are many reasons that this might be the case.  One is that the more credits someone has, the shorter the delay to the reward of graduation, which increases the student’s motivation to do the remaining work needed to graduate.

Another is that the more credits someone has accumulated, and therefore the less time there is to graduation, the fewer the opportunities there are for something to occur in the student’s life that will interfere with graduation.

Still another possibility is that students who have accumulated many credits are more likely to have taken and passed more credits each semester than is the case for other students, and so are also more likely than other students to take and pass more credits per semester in the future.  Such habits help students to complete their degrees.

Accumulating many credits has been described as constituting “academic momentum,” whereby having accumulated credits propels students to completion.

Whatever the reason, it is clear that accumulating more credits increases the probability that a student will complete his or her degree, and so a good bet for helping students to complete is to help them accumulate additional credits, and to do so at a faster rate.

This conclusion also means that, if you want to compare the relative graduation rates of different groups of students that are exposed to different interventions, you need to make sure that, at the start of the interventions, the groups are matched in terms of the numbers of credits that they have already accumulated.

For example, suppose you want to compare the relative graduation rates of students who start at a college as freshmen (what we can call native students) with students who transfer into that college as juniors.  At entry to the college, the transfer students will likely already have accumulated one-fourth to one-half of the total credits that they need to graduate, but the freshmen will have started at the college with zero credits.  To do an apples-to-apples comparison, the transfers need to be compared to students who started at the college as freshmen, but who have already accumulated, on average, the same number of credits as the transfers.  When such a comparison is made, transfers are less likely to graduate than are native students, with a common reason being loss of credits on transfer.
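
Here is a minimal sketch of the kind of credit-matched, apples-to-apples comparison described above; the student records and the matching window of plus or minus 3 credits are invented for illustration:

    # Compare transfer students with native students who have accumulated similar credits.
    # The records and the +/- 3-credit matching window are invented for illustration.

    def graduation_rate(students):
        return sum(s["graduated"] for s in students) / len(students)

    def credit_matched_comparison(transfers, natives, window=3):
        # Keep only native students whose accumulated credits, at the comparison point,
        # fall within `window` credits of at least one transfer student's total.
        matched = [n for n in natives
                   if any(abs(n["credits"] - t["credits"]) <= window for t in transfers)]
        return graduation_rate(transfers), graduation_rate(matched)

    transfers = [{"credits": 60, "graduated": True}, {"credits": 45, "graduated": False}]
    natives = [{"credits": 58, "graduated": True}, {"credits": 10, "graduated": False},
               {"credits": 47, "graduated": True}]
    print(credit_matched_comparison(transfers, natives))   # (0.5, 1.0) with these toy records

In practice, researchers use more careful matching or statistical adjustment, but the principle is the same: compare groups that start from the same credit position.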

The fact that probability of graduation increases with the number of accumulated credits has implications both for how we help students graduate and for how we investigate what other factors affect their graduation.
