
Result: Evaluating interactive systems in TREC

Title:
Evaluating interactive systems in TREC
Source:
Evaluation of information retrieval systems. Journal of the American Society for Information Science, 47(1): 85-94
Publisher Information:
New York, NY: John Wiley & Sons, 1996.
Publication Year:
1996
Physical Description:
print; 27 references
Original Material:
INIST-CNRS
Document Type:
Journal Article
File Description:
text
Language:
English
Author Affiliations:
City University, Centre for Interactive Systems Research, Department of Information Science, London EC1V 0HB, United Kingdom
ISSN:
0002-8231
Rights:
Copyright 1996 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Notes:
Sciences of information and communication. Documentation

FRANCIS
Accession Number:
edscal.3012838
Database:
PASCAL Archive

Abstract:

The TREC (Text REtrieval Conference) experiments were designed to allow large-scale laboratory testing of information retrieval techniques. As the experiments have progressed, groups within TREC have become increasingly interested in finding ways to allow user interaction without invalidating the experimental design. The development of an interactive track within TREC to accommodate user interaction has required some modifications to the way the retrieval task is designed. In particular, there is a need to simulate a realistic interactive searching task within a laboratory environment. Through successive interactive studies in TREC, the Okapi team at City University London has identified methodological issues relevant to this process. A diagnostic experiment, conducted as a follow-up to the TREC searches, attempted to isolate the human and automatic contributions to query formulation and retrieval performance.