The research presented here takes a deliberately cautious stance toward AI methods.
Large language models are powerful, but they are not neutral. They can act as flexible interpreters, useful comparators, and structured summarizers, yet they can also hallucinate, over-smooth, and conceal uncertainty behind fluent language. Lexicons, classifiers, embedding systems, and prompted LLMs should therefore not be treated as interchangeable solutions; each occupies a different methodological role.
One of the recurring themes of this work is that disagreement between methods is often informative. It can reveal construct boundaries, interpretive limits, and differences between sensing and explanation. The aim is not to find one magical model, but to build a saner methodological language for working with longitudinal human text.
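As a minimal sketch of the point about informative disagreement, the toy comparison below scores the same texts with two deliberately simple heuristics, a bare lexicon lookup and a negation-aware variant, and flags cases where they diverge rather than averaging them away. The lexicon, function names, and threshold are illustrative assumptions, not any method used in this research.

```python
# Toy illustration (hypothetical): two simple scoring "methods" applied
# to the same texts, with disagreement surfaced rather than hidden.

TOY_LEXICON = {"good": 1, "great": 1, "bad": -1, "awful": -1}

def lexicon_score(text):
    """Mean polarity of lexicon words, ignoring all context."""
    hits = [TOY_LEXICON[w] for w in text.lower().split() if w in TOY_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def negation_aware_score(text):
    """Same lexicon, but flips the polarity of a word preceded by 'not'."""
    total, count, flip = 0.0, 0, 1
    for w in text.lower().split():
        if w == "not":
            flip = -1
            continue
        if w in TOY_LEXICON:
            total += flip * TOY_LEXICON[w]
            count += 1
        flip = 1
    return total / count if count else 0.0

def compare(texts, threshold=0.5):
    """Report both scores per text and flag substantive disagreement."""
    return [
        {
            "text": t,
            "lexicon": lexicon_score(t),
            "negation_aware": negation_aware_score(t),
            "disagree": abs(lexicon_score(t) - negation_aware_score(t)) > threshold,
        }
        for t in texts
    ]
```

On "not good", the two heuristics return opposite signs, and it is exactly that flagged disagreement, not either score alone, that reveals where the plain lexicon's construct boundary lies.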