Home
I study what happens to people when institutional systems don't fit them. Sometimes I build better tools for noticing that. The PhD happened because I kept asking questions that the tools I could find didn't quite answer.
Nolan (the gray one) contributes by sitting on whatever I'm currently looking at.
Complex responsive processes.
A Luhmann essay that has been open for two weeks.
Two hundred hours in. The game remains unimpressed. This is, somehow, not a deterrent.
Research
From what I've read, most frameworks for academic resilience measure whether someone expects to struggle. I haven't found one that measures what happens after. Grit looks forward. Growth mindset looks forward. Self-worth theory looks forward. I couldn't find an instrument built for the aftermath. That's the gap that caught my attention.
To understand what happens in that gap, I use a mix of approaches. On the quantitative side: statistical techniques that try to map who follows which path and why. Grouping students by the trajectories they actually follow rather than averaging everyone together. On the qualitative side: thousands of open-response narratives, processed through computational analysis for patterns in how people describe what happened to them.
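To make "grouping by trajectory" concrete: the sketch below clusters synthetic term-by-term GPA paths with k-means. It's a toy, and k-means is a deliberately simple stand-in for the mixture-based trajectory models this kind of work usually calls for; every number and name in it is invented.

```python
# Toy illustration of trajectory grouping: cluster students by the
# shape of their term-by-term GPA path instead of averaging everyone
# into one curve. Synthetic data; k-means stands in for the mixture
# models a real analysis would use.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_terms = 5

# Three invented archetypes: steady, recovering, declining.
steady     = 3.0 + rng.normal(0, 0.2, (100, n_terms))
recovering = np.linspace(1.5, 3.0, n_terms) + rng.normal(0, 0.3, (100, n_terms))
declining  = np.linspace(3.2, 1.8, n_terms) + rng.normal(0, 0.3, (100, n_terms))
gpa = np.clip(np.vstack([steady, recovering, declining]), 0.0, 4.0)

# Cluster on the whole path, so the shape of the trajectory is what matters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(gpa)

for k in range(3):
    path = gpa[labels == k].mean(axis=0).round(2)
    print(f"group {k}: n={np.sum(labels == k):3d}, mean path={path}")
```

In this toy, the grand mean of all three hundred paths comes out nearly flat, which is exactly the story averaging tells and exactly the story that's wrong for two of the three groups.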
The thing that keeps this grounded: every measurement decision eventually lands in someone's real situation. Build a tool that measures the wrong thing, and you design an intervention around the wrong thing. The instrument shapes what you can see.
Tools
I needed something and didn't have $12 a month for it. A SQLite database and a Saturday got me close enough. This is a recurring pattern. Research pipelines. Local LLMs for personal finance. Voice memos that become searchable notes. A horror-themed terminal learning environment, because bash documentation should not be the scariest thing you encounter.
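The "$12 a month" example, sketched: a local notes store built on SQLite's bundled FTS5 full-text index. The table name, schema, and sample note are invented for illustration; the point is how little is actually required.

```python
# A local, searchable notes store in one file: SQLite's FTS5 extension
# gives full-text indexing with no server and no subscription.
# Table name, columns, and the sample note are invented.
import sqlite3

conn = sqlite3.connect("notes.db")  # one file on disk; the data stays local
conn.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS notes USING fts5(created UNINDEXED, body)"
)

def add_note(created: str, body: str) -> None:
    conn.execute("INSERT INTO notes VALUES (?, ?)", (created, body))
    conn.commit()

def search(query: str) -> list[tuple[str, str]]:
    # MATCH queries the full-text index; ORDER BY rank sorts by relevance.
    return conn.execute(
        "SELECT created, body FROM notes WHERE notes MATCH ? ORDER BY rank",
        (query,),
    ).fetchall()

add_note("2025-01-04", "memo transcript: revisit the attribution coding scheme")
print(search("attribution"))
```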
The data stays local. The tools stay free. Software that works for the person using it rather than the company selling it is not a radical position, though it sometimes gets treated like one.
Background
I argued against creating the program I now run. On the committee that proposed it, I was the dissenting voice. I didn't think it was doable. When it got approved, I took the job anyway. The failure would be spectacular. The success would prove me wrong. Both seemed worth showing up for.
My undergraduate transcript has 45 credit hours of F's on it, which is honestly an impressive amount of not-getting-it-right. I left, did some other things, eventually came back and found my way into student affairs — building support systems for people navigating the same kind of rough patches I'd been through. The path wasn't planned, but it turns out lived experience is pretty good training material.
Don't change the student. Change the system's understanding of the student. That's the design principle underneath everything else on this site.
The doctoral program happened because "does this actually work?" turned out to be a question that required better tools to answer than I had. Psychometrics. Qualitative methods. Learning theory. Literacy studies. The academic side gives me frameworks and rigor. The practitioner side keeps asking whether any of that actually helps the person in the room.
How I Think
Two frameworks disagree about why a student fails. One says the student's beliefs are the mechanism. The other says the student's environment is. My read is that they're both onto something. The part worth studying is the interaction. That's where I keep ending up.
where people sit
Every person exists inside layers of context. Their classroom, their institution, their family, their culture. Bronfenbrenner mapped these layers out. The insight is that you can't fully understand what someone does without understanding what surrounds them. This sounds obvious until you actually try to measure it.
Luhmann went further: systems don't just contain people. They have their own logic, their own way of observing, their own blind spots. A university has goals that aren't the same as any individual student's goals. Those misalignments are where things get worth studying.
how change happens
Complex adaptive systems research says: small things can have large effects, and large interventions can disappear without a trace. There's no straight line from input to output when the system has many parts that interact. This is frustrating if you want clean findings. It's accurate if you work in higher education.
Complex responsive processes research goes further. It says meaning itself emerges from interaction. You can't design for it in advance. You can only create conditions where it's more likely to happen, and then pay attention to what comes out.
assets, not deficits
Asset-based frameworks start with the question: what's already working? This isn't optimism. It's a design principle. If you build interventions around deficits, you tend to get more deficit-mapping. If you start from existing strengths, you get something more durable. The evidence for which approach produces better outcomes is genuinely mixed, which is itself a finding worth sitting with.
the story someone tells about what happened
Attribution theory asks: when something happens, how do people explain it to themselves? The explanation shapes what they do next. Someone who attributes a setback to something permanent and personal behaves differently than someone who sees it as situational and specific. This matters enormously for how you design support. The story someone tells about what happened is part of what happened.
A concept map ties these frameworks to each other and to the work. Some of the connections are well-established in the literature. Some are my own wiring.
The Work
One conversation. That's what the data kept showing. A single structured contact was associated with roughly doubling the rate at which students returned to good academic standing. Not a semester of intervention. Not ten touchpoints. One conversation. I checked the data several times.
Counters: students served, FA23–FA25 · returned to good standing · zero → one contact · stayed there the next term.
What makes this worth studying is the trajectory, not the snapshot. Over five terms, engagement rates declined while effectiveness increased. Fewer contacts. Each one doing more. That is not the direction the math suggests. Something about the quality or timing of the contact changed. Or the population changed. Or both. The data is not equipped to fully separate these. The honest version of this finding includes that sentence.
The students who engage are telling you something about where they are. The students who don't engage are an entirely different question.
For students appearing in multiple terms, high-engagement terms are associated with roughly a quarter of a grade point difference compared to their own low-engagement terms (p<0.0001, Wilcoxon signed-rank). Same person. Different conditions. This is as close to a controlled comparison as observational data gets, which is to say: closer than most, not as close as an experiment, and that distinction should be in every sentence that cites this number.
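A minimal sketch of that within-person comparison, on synthetic data: pair each student's GPA in high-engagement terms against the same student's GPA in low-engagement terms, then test the paired differences. scipy's `wilcoxon` handles the signed-rank part; nothing below reproduces the actual estimates.

```python
# Within-person comparison, sketched: each student's GPA under high
# engagement paired against the same student's GPA under low engagement,
# tested with a Wilcoxon signed-rank test. All data below is synthetic.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
n = 200  # students observed in both conditions

gpa_low = np.clip(rng.normal(2.4, 0.5, n), 0.0, 4.0)
gpa_high = np.clip(gpa_low + rng.normal(0.25, 0.3, n), 0.0, 4.0)

diff = gpa_high - gpa_low
stat, p = wilcoxon(gpa_high, gpa_low)  # paired, nonparametric

print(f"median within-person difference: {np.median(diff):.2f} grade points")
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.2g}")
```

The pairing is the whole point: each student serves as their own baseline, which removes stable between-student differences but not time-varying ones. That's the caveat the paragraph above is carrying.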
All estimates are observational. Selection effects are not fully controlled. Students who engage may differ from those who don't in ways not captured in these data. "Associated with" is doing real work in these sentences.