RUL Staff networking & communicating re Academic Libraries, Resources, Scholarly Communication, Research Support, Access, Workplace, & more ...
Monday, September 23, 2013
Just Enough of a Good Thing: Indications of Long-Term Efficacy in One-Shot Library Instruction
Students who use Google to search for information on the web face tough decisions and an almost infinite number of choices. Their problem is often under-specified, in that there are endless (more or less effective) strategies for tackling the task and for evaluating solutions. Ideally, students would assess the relevant information about each potential source and make the best selections based on data. In practice, however, students, like other decision makers, have physical, cognitive, and time constraints, as well as personal preferences, that influence how they search for information on the web and, ultimately, how they choose their sources (Agosto, 2002).

Today's students are fortunate to live in a time when the available information and choice of sources are so diverse. Unfortunately, having more choices does not always translate into better decision making (Iyengar & Lepper, 2000; Schwartz, 2000). Researchers have found that when faced with many alternatives, naïve decision makers typically start with simple, automatic strategies to eliminate options, and use more effortful, deliberate strategies only when very few choices remain (Lenton & Stewart, 2008; Payne, 1976). We would therefore expect students to engage in some level of satisficing: scanning selections for those with some minimum of easy-to-understand features, and then stopping. (According to Simon (1956), to satisfice is to seek “good enough,” searching until encountering an option that crosses the threshold of acceptability, or implementing any decision that appears satisfactory without evaluating all the alternatives.) Still, since expertise in a field is not necessary to stimulate problem solving by searching for analogies or similarities between the current problem and problems solved in the past (Bearman, Ball, & Ormerod, 2007), students who have been taught how to evaluate sources in one class should be more likely to use those structural similarities to solve a new source-evaluation problem.

In this examination of student website evaluations, the researchers found that library instruction may indeed have provided the knowledge base needed to increase the likelihood that participants would make analytical, rule-governed decisions about sources. Although past studies of the efficacy of the one-shot library session have shown mixed results, the current researchers' methods provided an opportunity to measure the relationship between past exposure to library instruction and self-reported use of the library and its resources (such as checking out a book, using a library database, and asking a librarian for help), as well as sophistication and proficiency in evaluating web resources.
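To make Simon's notion of satisficing concrete, the selection rule can be sketched as "take the first option that clears an acceptability threshold," in contrast to an exhaustive strategy that evaluates every alternative before choosing. The short Python sketch below is purely illustrative: the credibility scores and the 0.5 threshold are hypothetical stand-ins, not values from the study.

    # Illustrative sketch: satisficing vs. exhaustive source selection.
    # The scores and threshold here are hypothetical, for demonstration only.

    def satisfice(sources, score, threshold):
        # Return the first source whose score crosses the acceptability
        # threshold, without evaluating the remaining alternatives.
        for source in sources:
            if score(source) >= threshold:
                return source
        return None  # no acceptable option found

    def maximize(sources, score):
        # Evaluate every alternative and return the best one.
        return max(sources, key=score) if sources else None

    sources = ["blog post", "news article", "peer-reviewed paper"]
    credibility = {"blog post": 0.3, "news article": 0.6, "peer-reviewed paper": 0.9}

    print(satisfice(sources, credibility.get, 0.5))  # news article (good enough)
    print(maximize(sources, credibility.get))        # peer-reviewed paper (best)

A satisficer stops at the first "good enough" option, so the order in which sources appear (for example, a search engine's ranking) largely determines what gets chosen; a maximizer's choice is order-independent but costs a full pass over all the alternatives.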
http://www.sciencedirect.com/science/article/pii/S0099133313001018
LibQUAL Revisited: Further Analysis of Qualitative and Quantitative Survey Results at the University of Mississippi
Abstract
This paper updates an earlier 2010 longitudinal study of LibQUAL qualitative and quantitative data from the University of Mississippi libraries with an additional year of LibQUAL data, collated with other library-collected data such as gate-count numbers. In doing so, it identifies several results that are not satisfactorily analyzed by LibQUAL, and it concludes that a more specialized local survey may be helpful to supplement or supplant LibQUAL.
Quality not quantity – Measuring the impact of research
Snippets from Warwick Anderson:
Now more than a decade old, open access is changing where researchers publish and, more importantly, how the wider world accesses – and assesses – their work.
The open access movement is also having a significant impact on how we measure the impact of scientific research.
The San Francisco Declaration on Research Assessment, which has now been signed by thousands of individual researchers and organisations, issued this strong statement earlier this year:
“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion or funding decisions.”
Nothing stays the same in science and research. Publishing is set to change further. The democratisation of publishing began with the internet and has a long way yet to run. The challenge for researchers, institutions and funders will be to identify, protect and encourage quality and integrity.
Warwick Anderson is professor and CEO at the National Health and Medical Research Council in Australia. This article, “Quality not quantity: Measuring the impact of published research”, was originally published on 18 September in The Conversation. Read the original article.