In my capacity at work (I work in the educational technology department of a school district, where I primarily support our school libraries), I’ve been facilitating a district-wide student choice book award process that I designed with a couple of colleagues. We call it the Granite Book Awards. We designed it so that winners are determined entirely by votes (it’s open to student, teacher, librarian, and parent votes, but the bulk of the votes come from students, as they should). Votes were accepted in three categories, and any book published for the first time in 2013 was eligible. Last week we revealed the winner lists for 2013.
So far I haven’t been able to evaluate effectively whether the program has been a success. I actually feel that in many ways it was unsuccessful, but I have nothing concrete to compare it to other than my own imaginative and theoretical expectations. I’m relatively pleased with our middle grade fiction list, even more pleased with our picture book list, but not too impressed with our young adult fiction list. So as a generator of decent book lists, in my opinion, the program was perhaps 66% effective. As a promotion that engaged students, however, it was perhaps only 20% effective.
We certainly did not get the number of votes we had hoped for in the YA Fiction category, not even enough to include more than five top titles on the final list (the other categories have ten titles each). Hence my disappointment with the YA list. And to be honest, probably half of those few YA votes actually came from elementary school students voting exclusively for the most recent book in the Michael Vey series, a nominally YA title that has successfully crossed over into our elementary schools. I’m not sure whether this lack of young adult participation was due to bad, not-teen-friendly promotion on our part, lack of buy-in from the librarians in our secondary schools, or a general lack of student engagement with libraries and librarians at some of our schools. I suspect all of this and more is at play.
I feel our greatest success was in the picture book category, where in addition to students casting their votes via an online ballot, we encouraged elementary media assistants to read books aloud to lower grade classes and then collect votes by a raise of hands from students who really liked the book. This produced a great list of winning picture books, though admittedly one semi-curated by the media assistants and teachers who chose to participate: the books they read to their students naturally received the most votes, far more than the number cast via the online ballot.
It turned out to be much more work than anticipated to get students of any age to vote independently using the ballot, and anecdotal reports suggest that the only schools that inspired any significant “turn-out” were those that offered incentives to participating students. Simply putting up a sign, talking about the program once, and making the ballot available on library search stations was not enough. We had wrongly assumed that voting would be an engaging activity in and of itself, and that children would want to share their opinions and get excited to pile on the votes for their personal favorites. As far as we can tell, that didn’t happen.
So, my question now is, do we continue onward with this program for 2014, working harder, giving more energy and promotional attention to the process, creating new and better graphics and web widgets, talking up more newly published books (we need to do this anyway), harassing and bribing students, teachers, and librarians for votes throughout the next calendar year?
Do we augment (or even replace) the votes from voting ballots with circulation statistics gathered automatically in the background from our library management software? This would give us a much fuller and probably more accurate list of the books being read the most in our schools, but would not require or (depending on your perspective) provide the opportunity for students to actively participate in the process. This would perhaps yield better data to aid in collection development, but it would not provide as much of a promotional activity to engage students.
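To make the circulation-statistics idea concrete, here is a minimal sketch of how such a ranking could be derived from a checkout report. The CSV layout, column names, and all figures are assumptions for illustration only; a real library management system’s export would look different, and the titles and counts below are invented.

```python
# Hypothetical sketch: ranking award candidates by circulation counts
# exported from a library management system. The export format and all
# data are assumptions, not any real product's output or real statistics.
import csv
import io
from collections import Counter

# Stand-in for a real circulation export (title, publication year, checkouts).
SAMPLE_EXPORT = """\
title,pub_year,checkouts
Title A,2013,412
Title B,2012,390
Title C,2013,288
Title D,2013,35
"""

def top_titles(export_text, award_year, limit=10):
    """Return up to `limit` titles first published in `award_year`,
    most-circulated first."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(export_text)):
        if int(row["pub_year"]) == award_year:
            counts[row["title"]] += int(row["checkouts"])
    return [title for title, _ in counts.most_common(limit)]

print(top_titles(SAMPLE_EXPORT, 2013))
# Title B is excluded despite heavy circulation, because it was
# published before the award year.
```

The eligibility filter matters: without it, perennial favorites from earlier years would crowd out the current year’s candidates, which is exactly the crossover effect we saw with popular series titles.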
Or do we put this whole process aside and focus our energies elsewhere? Do student choice awards, even if done right and eliciting widespread participation, actually do anything to increase student engagement, student excitement about reading, or student learning? If so, how will we know? Was ours done right but just not enough? Are there other programs out there that are done right that we can look to for guidance?
Another option would be to throw our energies and participation fully into our state’s student choice book award program, the Beehive Book Award. Should we have simply become major participants and contributors to that award program in the first place?
To be frank, we initially created our own award program largely out of dissatisfaction with that state-level program. Over the years we became unimpressed with what we saw as its disorganization, lack of an up-to-date or meaningful web presence, lack of transparency, and lack of widespread, diverse participation in the committees that decided the finalists. Since that initial disenchantment I feel they have done much to improve in all of these areas, but our major point of contention remains: we feel that many of their book choices are irrelevant to our students, or at the least are not the books we want to put large amounts of our budgets and energy into encouraging students to read. In starting our own district award we felt confident that we could generate better book choices ourselves, especially by crowd-sourcing our students, teachers, and librarians. I think our winning lists from this year do better reflect the reading tastes of our students than the Beehive lists, but again, it is unclear to me whether that increased relevance has any real positive effect on student learning, which should be our goal.
An admitted value of the Beehives is that they can expose students to books they might not otherwise hear about or read, whereas our Granite Book Award lists to a certain extent just reflect back to students what they are already reading. But shouldn’t a so-called children’s choice award do that? If many of the most read and most popular books are not available as options on a children’s choice ballot, wouldn’t that undermine the legitimacy of the award in the students’ eyes? Shouldn’t it be an actual forum for and representation of their interests and opinions? Some of the books on the Beehive lists are indeed awesome books that deserve more promotion from libraries and need to get in front of more kids’ eyes. But that’s a different mission than a children’s choice award. Which mission is more important, and is a hybrid like the Beehives actually a pretty good solution that we should just roll with?
We suspect that our participation levels in the Granite Book Awards this year were not dissimilar to those of the Beehive Book Award. However, we have no way to know. It would certainly be instructive (not only to us but to the Beehive people) if we could compare those numbers and use them to determine how to improve or whether to continue. (Because of another work project, I have this kind of results-centered, practice-changing collaborative data analysis on the brain a lot lately, which is probably a good thing to have on the brain.)
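If we ever did get those numbers, raw vote totals wouldn’t be directly comparable between a single district and a statewide program; they would need to be normalized by the eligible population. A minimal sketch of that arithmetic, with every figure invented purely for illustration (none of these are real Granite or Beehive statistics):

```python
# Hypothetical sketch of the normalization such a comparison would need:
# divide votes cast by the eligible population. All numbers are made up.

def participation_rate(votes_cast, eligible_voters):
    """Fraction of the eligible population that actually voted."""
    return votes_cast / eligible_voters

# Invented example figures (assumptions, not reported statistics):
granite_rate = participation_rate(1200, 68000)    # district-level program
beehive_rate = participation_rate(9000, 600000)   # state-level program
print(f"Granite: {granite_rate:.2%}, Beehive: {beehive_rate:.2%}")
# → Granite: 1.76%, Beehive: 1.50%
```

Even this toy example shows why the raw counts alone mislead: the state program could collect many times more votes while actually engaging a smaller share of its audience.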
I’m not that great at reaching out to people to collaborate and share in person; I’m apparently better at publishing criticisms on an obscure blog. However, I may try to contact the Beehive people soon and get involved. If you are one of them, will you forgive me for my criticisms of your organization? I really just want us all to get better at this for the good of our students and patrons. And I certainly don’t have all the answers. I don’t know that I would trust anyone who claims to.