Our institution recently started using an AI detector to screen student submissions. If the detector's score exceeds a set threshold, the work is treated as AI-generated, with no requirement for human review before penalties are applied. When did probabilistic scores start being treated as final judgments, and is this ethically defensible?

