The Importance of Being Driven – By Data: Part Two
Dr. Susan Coia Gailey
Founder
Data-based Institutional Research, Assessment & Reporting Systems
Continued from The Importance of Being Driven – By Data: Part One
Sometimes, however, even simple reporting is not so simple. As you may know, savvy schools complete and return the Common Data Set (CDS), which populates The College Board's popular website. It's a one-shot survey for publishers that reduces our reporting burden, provides free publicity, and promotes consistency in external reporting. How would it look if your school were called on the carpet over inconsistent figures for a bond issuance? You lose credibility. At the very least, schools that have been accused of reporting incorrect figures are embarrassed, and dealing with corrections consumes time and energy that could be better spent. Moreover, completing these reports is tedious and time-consuming for data-disorganized institutions. Ironically, it is often the data-disorganized offices that claim to be too busy to improve data management when, in reality, data management deficits are what keep them busy.
In addition to IPEDS, and perhaps the CDS, many of you complete surveys for college rankings and maintain a Fact Sheet and/or Fact Book. Many of you also have a suite of internal reports that contain even more figures. Does your suite of reports give you all the information you need? How do you know whether you are receiving exactly what you need?
Consider this:
1. Does the information you glean from your suite of reports change how you do things?
2. How long must you wait for just one ad hoc report, or for an ad hoc request for a single figure?
Let's say we have a table of historic four-, five- and six-year graduation rates in our Fact Book. The table tells us how we're doing; year-to-year persistence rates provide benchmarks. However, the table does not tell us how to increase our graduation rate. By itself, the table gives us no such insight. How do we influence this important outcome of interest, and others like it?
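To make the descriptive starting point concrete, here is a minimal sketch in Python of how such a Fact Book table might be computed. It assumes a hypothetical cohort extract with columns cohort_year and years_to_degree; the data and column names are illustrative, not a prescribed layout.

```python
import pandas as pd

# Hypothetical cohort extract: one row per first-time, full-time student,
# with the entering cohort year and years taken to earn the degree
# (missing when the student never graduated).
students = pd.DataFrame({
    "cohort_year":     [2015, 2015, 2015, 2016, 2016, 2016, 2016],
    "years_to_degree": [4,    6,    None, 4,    5,    None, 4],
})

# N-year graduation rate = share of the entering cohort finishing within N years.
for n in (4, 5, 6):
    rate = (students["years_to_degree"] <= n).groupby(students["cohort_year"]).mean()
    print(f"{n}-year graduation rate by cohort:\n{rate}\n")
```

The output is exactly the kind of descriptive table we are discussing: useful as a benchmark, silent on what to do next.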
Let's continue with this example. A graduation rate is a descriptive statistic. Data-based/analytical research explains the statistic: it identifies the multiple predictors of this metric of student success and institutional effectiveness so we can develop strategies and tactics to influence it. Data-based research combines predictors of graduation into even more powerful applicant and student profiles. Profiles are "multivariate"; they consist of sets of predictors. With data that originate in the Admissions and Financial Aid operations, we can identify profiles of Best Fit students for our school at the time we're making acceptance decisions. We can rate the probability that an applicant is at risk of dropping out, instead of relying solely on our "Early Alert" system once the student is already in free fall.
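As one illustration of such profile-based scoring, the sketch below fits a logistic regression to a historical extract and scores the current applicant pool. The file names and predictor columns (hs_gpa, test_score, efc, distance_from_campus) are assumptions for the example, not fields any particular system provides, and the model choice is just one reasonable option.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical extract joining Admissions and Financial Aid data;
# 'dropped' is 1 if the student left without graduating.
history = pd.read_csv("admitted_students_history.csv")
features = ["hs_gpa", "test_score", "efc", "distance_from_campus"]  # assumed columns

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["dropped"], test_size=0.25, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score this year's applicant pool at decision time: a dropout-risk
# probability for each applicant, before any Early Alert signal exists.
applicants = pd.read_csv("current_applicants.csv")
applicants["risk_score"] = model.predict_proba(applicants[features])[:, 1]
```

The point is the timing, not the particular classifier: the risk rating is available while acceptance and aid decisions are still being made.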
Research on our institution might confirm our suspicion, supported by the higher education literature, that ability to pay is associated with persistence to graduation. Research takes us a step further and quantifies just how much it matters for various pockets of students so we can award accordingly. First, research identifies a metric for ability to pay that is significant at our school, such as Expected Family Contribution (EFC); then it estimates how the probability of graduating changes with EFC. When research demonstrates a bivariate relationship between EFC and graduation, it helps to inform our tactic of providing grants to improve our graduation rates. Research informs the critical discount amounts, keyed to EFC, that will increase enrollment and persistence rates. Suppose, though, that ability to pay is not the only heavy hitter in predicting graduation.
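A minimal sketch of that bivariate step, assuming a hypothetical cohort extract with an efc column and a 0/1 graduated flag (both names are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical extract: one row per cohort student.
df = pd.read_csv("cohort_history.csv")

# Bivariate logistic regression: graduation as a function of EFC alone.
X = sm.add_constant(df["efc"])
fit = sm.Logit(df["graduated"], X).fit()

# Predicted graduation probability across the observed EFC range, so we
# can see how much a given change in EFC is worth.
grid = sm.add_constant(
    pd.DataFrame({"efc": np.linspace(df["efc"].min(), df["efc"].max(), 5)})
)
print(fit.predict(grid))
```

Reading the predicted probabilities across the EFC grid is what turns the general suspicion into an award-sized answer: how much a grant that effectively lowers a family's burden is expected to move persistence.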
Institutional research might confirm our suspicion that academic preparation and academic commitment also affect persistence to graduation at our institution. Research identifies metrics for these constructs, too. Research takes all of these significant predictor-outcome relationships a step further when it combines them into applicant and student profiles. Such profiles are much more powerful in predicting important outcomes of interest such as graduation. Now we can decide which applicants to admit and which might benefit from financial, academic and/or counseling support, so that we proactively influence our graduation rates instead of just reporting them. We track and assess the progress of our strategies, tactics and interventions by reporting year-to-year persistence rates, as well as GPA, one important measure of academic performance. We can more finely determine strategies, tactics and interventions, such as how to distribute our limited financial aid to effectively and efficiently enroll and retain students who are the Best Fit for our institution.
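The claim that profiles outpredict any single metric can itself be checked. The sketch below, continuing with the same hypothetical extract and assumed column names, compares the discrimination (AUC) of an EFC-only model against a multivariate profile:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("cohort_history.csv")  # hypothetical extract, as above
y = df["graduated"]

bivariate = ["efc"]  # ability to pay alone
profile = ["efc", "hs_gpa", "test_score", "credits_attempted_term1"]  # assumed columns

X_tr, X_te, y_tr, y_te = train_test_split(df[profile], y, test_size=0.25, random_state=0)

for cols, label in ((bivariate, "EFC only"), (profile, "multivariate profile")):
    m = LogisticRegression(max_iter=1000).fit(X_tr[cols], y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te[cols])[:, 1])
    print(f"{label}: AUC = {auc:.3f}")
```

If the profile's AUC is materially higher, that is the quantitative version of "much more powerful," and it justifies acting on the profile rather than on any one predictor.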
Research can even develop an Academic Preparation Index from multiple measures of academic preparation to interface with EFC. The Academic Index and EFC can then serve as the parameters of a financial aid awarding matrix. Research reports figures in a matrix defined by academic preparation and ability to pay: you see historic enrollment, persistence and graduation rates in each "cell" of the awarding matrix, along with average grants and loans. Your awarding strategy has been informed by data-based research on years of history, and you then actively track progress toward your enrollment goals during the admissions cycle by populating the matrix with depositors every week or so. This gives you the chance to revisit your game plan and, perhaps, do some targeted marketing to shape enrollment instead of just waiting for the final enrollment report after the dust has settled. You might even have a different awarding matrix for different areas of study (e.g., nursing, law, education) or some other classification of interest.
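A minimal sketch of how such a matrix might be assembled, assuming a hypothetical historical extract with columns academic_index, efc, enrolled (0/1), and grant_amount; the band cut-points and file name are illustrative choices, not recommendations:

```python
import pandas as pd

df = pd.read_csv("admit_history.csv")  # hypothetical extract (assumed columns)

# Band the two awarding parameters: the Academic Preparation Index and EFC.
df["api_band"] = pd.qcut(df["academic_index"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
df["efc_band"] = pd.cut(
    df["efc"], bins=[0, 5_000, 15_000, 30_000, float("inf")],
    labels=["<5k", "5-15k", "15-30k", "30k+"], include_lowest=True,
)

# One figure per measure in each academic-preparation x ability-to-pay cell:
# historic enrollment rate and average grant.
matrix = df.pivot_table(index="api_band", columns="efc_band",
                        values=["enrolled", "grant_amount"],
                        aggfunc="mean", observed=True)
print(matrix)
```

Re-running the same pivot on the current depositor file each week is what keeps the matrix live during the admissions cycle, so the game plan can be revisited before the dust settles.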
Continued in The Importance of Being Driven – By Data: Part Three.