Information retrieval evaluation / Donna Harman.

Evaluation has always played a major role in information retrieval, with the early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture has the goal of explaining where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment in the search engine world today. The lecture starts with a discussion of the early evaluation of information retrieval systems, beginning with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain.

Bibliographic Details
Online Access: Access E-Book
Access Note: Access to electronic resources restricted to Simmons University students, faculty and staff.
Main Author: Harman, D. K.
Format: Electronic eBook
Language: English
Published: San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, c2011.
Series:Synthesis digital library of engineering and computer science.
Synthesis lectures on information concepts, retrieval, and services, # 19.
Subjects: Information retrieval -- Evaluation; Information storage and retrieval systems -- Evaluation.
LEADER 05003nam a2200577 a 4500
001 b2132986
003 CaEvIII
005 20110627125117.0
006 m e d
007 cr cn |||m|||a
008 110618s2011 caum foab x000 0 eng d
020 |a 9781598299724 (electronic bk.) 
020 |z 9781598299717 (pbk.) 
024 7 |a 10.2200/S00368ED1V01Y201105ICR019  |2 doi 
035 |a (CaBNVSL)gtp00548366  |a (MBSi)1mc201105ICR019 
040 |a CaBNVSL  |c CaBNVSL  |d CaBNVSL 
050 4 |a ZA3075 .H275 2011
100 1 |a Harman, D. K.  |q (Donna K.) 
245 1 0 |a Information retrieval evaluation  |h [electronic resource] /  |c Donna Harman. 
260 |a San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) :  |b Morgan & Claypool,  |c c2011. 
300 |a 1 electronic text (x, 107 p.) :  |b digital file. 
490 1 |a Synthesis lectures on information concepts, retrieval, and services,  |x 1947-9468 ;  |v # 19 
500 |a Part of: Synthesis digital library of engineering and computer science. 
500 |a Series from website. 
504 |a Includes bibliographical references (p. 87-105). 
505 0 |a 1. Introduction and early history -- Introduction -- The Cranfield tests -- The MEDLARS evaluation -- The SMART system and early test collections -- The Comparative Systems Laboratory at Case Western Reserve University -- Cambridge and the "Ideal" Test Collection -- Additional work in metrics up to 1992 --
505 8 |a 2. "Batch" Evaluation Since 1992 -- 2.1. Introduction -- 2.2. The TREC evaluations -- 2.3. The TREC ad hoc tests (1992-1999) -- Building the ad hoc collections -- Analysis of the ad hoc collections -- The TREC ad hoc metrics -- 2.4. Other TREC retrieval tasks -- Retrieval from "noisy" text -- Retrieval of non-English documents -- Very large corpus, web retrieval, and enterprise searching -- Domain-specific retrieval tasks -- Pushing the limits of the Cranfield model -- 2.5. Other evaluation campaigns -- NTCIR -- CLEF -- INEX -- 2.6. Further work in metrics -- 2.7. Some advice on using, building and evaluating test collections -- Using existing collections -- Subsetting or modifying existing collections -- Building and evaluating new ad hoc collections -- Dealing with unusual data -- Building web data collections -- 
505 8 |a 3. Interactive Evaluation -- Introduction -- Early work -- Interactive evaluation in TREC -- Case studies of interactive evaluation -- Interactive evaluation using log data -- 
505 8 |a 4. Conclusion -- Introduction -- Some thoughts on how to design an experiment -- Some recent issues in evaluation of information retrieval -- A personal look at some future challenges -- Bibliography -- Author's biography. 
506 |a Access to electronic resources restricted to Simmons University students, faculty and staff. 
510 0 |a Compendex 
510 0 |a INSPEC 
510 0 |a Google scholar 
510 0 |a Google book search 
520 3 |a Evaluation has always played a major role in information retrieval, with the early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture has the goal of explaining where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment in the search engine world today. The lecture starts with a discussion of the early evaluation of information retrieval systems, beginning with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain.
530 |a Also available in print. 
538 |a Mode of access: World Wide Web. 
538 |a System requirements: Adobe Acrobat Reader. 
650 0 |a Information retrieval  |x Evaluation. 
650 0 |a Information storage and retrieval systems  |x Evaluation. 
690 |a Morgan & Claypool 
830 0 |a Synthesis digital library of engineering and computer science. 
830 0 |a Synthesis lectures on information concepts, retrieval, and services,  |x 1947-9468 ;  |v # 19. 
856 4 8 |3 Abstract with links to full text  |u https://ezproxy.simmons.edu/login?url=http://www.morganclaypool.com/doi/abs/10.2200/S00368ED1V01Y201105ICR019  |y Access E-Book 
907 |a .b21329862  |b 150707  |c 200605 
913 |a - 
945 |g 1  |j 0  |l elere  |o -  |p $0.00  |q    |r    |s e  |t 12  |u 0  |v 0  |w 0  |x 0  |y .i19308875  |z 150707 
998 |a elere  |b 150707  |c m  |d x  |e -  |f eng  |g cau  |h 0 
999 f f |i 97ced87a-a997-11ea-8da7-1466fadbd8b9  |s 80fdd9ad-a8bb-4196-a9f0-d127d1516fb7 
852 |b Online Resources  |h ZA3075 .H275 2011  |0 c6bfb68a-a99d-11ea-b550-3a67fadbd8b9