Why CEDAR

The case for a different kind of analytics


The gap in the middle

Most universities have significant analytics infrastructure. They have enterprise systems — Banner, PeopleSoft, Workday — that manage operational data. They have institutional research offices that produce reports for accreditors and senior leadership. They may have purchased specialized analytics platforms that promise insight into student success.

And yet department chairs routinely make decisions about course scheduling, staffing, and curriculum without data that speaks to their actual questions.

This isn’t an accident. Enterprise systems are built for administration. IR offices serve the provost and the board. Analytics platforms are purchased at the institutional level and configured to answer institutional-level questions. The people who run academic programs — chairs, graduate directors, associate deans — are a different audience with different questions, and they are largely underserved.

CEDAR is designed for that gap.


What vs. why

The analytics most institutions already have are good at answering what. How many students enrolled? What’s the headcount by major? How many sections ran last fall?

The questions that actually drive curriculum decisions are different. They ask why and what does it mean:

  • Which sections consistently fill slowly, and is there a pattern to when they cancel?
  • Do students who take Calculus before Physics perform differently than those who take them simultaneously?
  • What characteristics distinguish students who succeed in the second course of a sequence from those who struggle?
  • How do DFW rates shift when a course moves from a tenure-track instructor to contingent faculty?

These questions exist at every institution. They come up in every curriculum committee, every program review, every conversation between a dean and a department chair. They almost never get answered — not because the data doesn’t exist, but because the infrastructure for asking them doesn’t.

CEDAR is built to ask them.


Reproducible by design

When a central analytics office answers a data question, you typically receive a number. Sometimes a chart. Rarely an explanation of how the number was derived, what was included or excluded, and what assumptions shaped the result.

This matters more than it might seem. Numbers without methodology are hard to defend, impossible to replicate, and difficult to build on. When a chair presents enrollment data to a curriculum committee, or when a dean makes a case to the provost, or when an institution documents student success outcomes for an accreditor, the methodology is part of the answer.

CEDAR produces analyses from documented, inspectable code. When you run an analysis, you have:

  • The result
  • The code that produced it
  • The ability to reproduce it exactly, or rerun it for a different term, department, or course
  • The ability to share the analysis — not just the output — with a colleague at another institution

This is what reproducibility means in practice. It’s standard in research. It’s almost entirely absent from institutional analytics.
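
As a rough illustration (the file name, column names, and function below are hypothetical, not CEDAR's actual API), a reproducible analysis is simply documented, parameterized code: the result, the method, and the ability to rerun it travel together.

    # Minimal sketch of a reproducible, parameterized analysis.
    # Illustrative only: "enrollments.csv", the column names, and dfw_rate
    # are invented for this example and are not CEDAR's actual interface.
    import pandas as pd

    def dfw_rate(enrollments: pd.DataFrame, term: str, subject: str) -> float:
        """Share of enrollments in a given term and subject ending in D, F, or W."""
        rows = enrollments[
            (enrollments["term"] == term) & (enrollments["subject"] == subject)
        ]
        if rows.empty:
            return float("nan")
        return rows["grade"].isin(["D", "F", "W"]).mean()

    if __name__ == "__main__":
        enrollments = pd.read_csv("enrollments.csv")  # hypothetical export
        # The same documented code reruns unchanged for another term or
        # department: only the parameters change.
        print(dfw_rate(enrollments, term="2023FA", subject="MATH"))
        print(dfw_rate(enrollments, term="2024SP", subject="PHYS"))

Rerunning the analysis for another term or department is a parameter change, not a new request to a central office, and the code itself documents what was included, excluded, and assumed.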


The collective model

The same questions come up everywhere. A graduate director at a research university wondering why some students succeed in a gateway course and others don’t is asking the same question as a graduate director at a liberal arts college, a regional university, a community college. The data is different. The question is the same.

Most institutions solve this problem independently, poorly, or not at all. Each IR office reinvents the analysis. Each department chair waits for a report that may or may not address what they actually asked. Each institution pays for software that doesn’t answer the question.

CEDAR’s approach is different. Analyses are built as cones — modular, documented, shareable units that answer specific questions. A cone developed to understand course sequence outcomes at one institution can be adapted and used at another. Solutions accumulate. The work compounds.
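
To illustrate the adaptation idea in generic terms (this is not CEDAR's cone format; the column names and mapping below are invented for the example), the shared analysis logic stays fixed while each institution supplies a small mapping from its own export to the fields the analysis expects.

    # Sketch of cross-institution adaptation: shared logic, local mapping.
    # Column names here are hypothetical examples of two institutions'
    # enrollment exports, not a real CEDAR specification.
    import pandas as pd

    INSTITUTION_A = {"term": "TERM_CODE", "course": "CRSE_ID", "grade": "FINAL_GRADE"}
    INSTITUTION_B = {"term": "semester", "course": "course_number", "grade": "grade"}

    def normalize(export: pd.DataFrame, mapping: dict[str, str]) -> pd.DataFrame:
        """Rename institution-specific columns to the shared names the analysis uses."""
        return export.rename(columns={local: shared for shared, local in mapping.items()})

The downstream analysis never changes; only the mapping does, which is what lets a solution built at one institution be reused at another.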

This is what “collective” means. Not just open source — though CEDAR is that. Not just free — though CEDAR is that too. It means that solving a problem well, once, in a form others can use, is worth more than solving it adequately, repeatedly, in isolation.


Honest about where we are

CEDAR is early-stage software with real, working analyses. It began at the University of New Mexico and currently reflects the data infrastructure and questions of one research university. The data model is designed to be institution-agnostic — if you can export standard enrollment data, you can use CEDAR — but adaptation requires technical work.

We’re not going to claim a thriving community that doesn’t exist yet. What exists is a platform with a genuine architecture, working analyses that chairs and directors have found useful, and a clear vision of what this could become with more institutions involved.

If that sounds like something worth being part of — whether you’re an IR analyst, a data-savvy researcher, or someone who has simply watched good questions go unanswered for too long — we’d like to hear from you.

  • Explore what CEDAR can do
  • Contribute to the project
  • Get in touch



CEDAR is open source software for higher education analytics.