we had a meeting today in my department to talk about how we’ve each been counting cataloging statistics. the meeting was spurred by the e-book cell on our spreadsheet, which had been curiously empty for a whole year though we’ve been gathering e-books like mad. where were those statistics going? we found and fixed the problem, but a nagging question has remained: if statistics are reported by two people in two different ways, how meaningful is the end result?
i asked this question before in an article i wrote a couple of years back, about arl's digitization statistics related to preservation. while writing the article it became clear to me that universities were interpreting the data categories in different ways, meaning the data could not be compared across universities. i hope that since that article was written, people have discussed and better defined what information should be supplied for those data categories, resulting in more accurate data.
i mentioned in a recent blog post that gathering and interpreting statistics is a large part of my job. in libraries we put such an emphasis on counting, and on evaluation based on those numbers. why is it, then, that as a whole we don't do a better job of deciding at the outset what we want to count and how best to report it?