Usability testing has long been a core interest of HCI research and a key element of industry practice. Yet our knowledge of it harbours striking absences. There are few, if any, detailed accounts of the contingent, material ways in which usability testing is actually practised. Moreover, industry practitioners' testing work is rarely treated as indigenous and particular; instead it is subordinated as a 'compromised' version. To address these problems, this paper presents an ethnomethodological study of usability testing practices in a design consultancy. It unpacks how findings are produced in and as the work of observers analysing the test as it unfolds between moderators and the participants they take through relevant tasks. The study nuances conventional views of usability findings as straightforwardly 'there to be found' or 'read off' by competent evaluators. It explores how evaluators/observers collaboratively work to locate relevant troubles in the test's unfolding. Yet in the course of this work, candidate troubles may also routinely be dissipated and effectively 'ignored' in one way or another. The study's implications suggest refinements to current understandings of usability evaluation, and affirm the value to HCI of studying industry practitioners' work more deeply.