Monday, October 21, 2013

All about science outcomes - thoughts from the evaluation world

I had the opportunity this year to attend the 2013 annual meeting of the American Evaluation Association. For me it was a good window into a different professional world. I’ve been exposed to evaluation concepts and done some metrics-level work, especially with the web and social media, along with strategic goal setting and what-not, but nothing too rigorous. Increasingly, though, evaluating programs and projects is part of my professional world, and it seems to form a useful trifecta of skills when combined with science communication and project development. So, some intellectual grounding in the topic seemed useful.

On a purely organizational level, I found the different types of sessions at the conference really refreshing. For example, there weren’t just talks; sessions were grouped into categories like “skill-building”, “demonstrations”, “think tanks”, etc. This seems reflective of the professional nature of evaluation – while there is certainly a strong research/researcher contingent at the meeting, professional evaluators are well integrated into the conference, and sessions that go beyond research presentations are useful in such a practice-welcoming setting (it would be great if professional scientific societies considered these models).

Overall, it was encouraging to see the amount of work being done in the evaluation field – it means there is much less wheel-reinventing that needs to be done in the natural sciences to gain a better foothold in evaluating research and outreach outcomes. Arthur Lupia gave a great plenary talk on the role of legitimacy and credibility in politicized environments, and Michael Patton put out some provocative thoughts on the current state and future of qualitative evaluation methods. Both talks, on a general level, stressed the importance of context, whether in interpreting data or communicating. Patton in particular talked about how much the person doing the research matters, which I found really interesting – this may be commonplace in social science research, but it is quite uncommon in the natural sciences. As a practitioner, I think recognizing that what you as a person bring to your work really matters is another area where natural science training doesn't quite prepare people for jobs outside academia (or within it).

Patton also spoke quite a bit about measuring the "unintended consequences" of projects, as well as "emergence." Both are conceptually interesting for evaluating projects related to the environment, as I'm sure both are pretty common. The topic of intervention also came up quite a bit - primarily around how the evaluator's role can end up also being that of an interventionist, for better or worse.

Social Media Evaluation

On the particular topics I was following, one of the more interesting sessions was a demonstration on social media evaluation. It was a good demo, but what really struck me was a discussion about the idea of qualitative social media evaluation. We have much better tools now to start looking beyond numbers of followers, likes, favorites, and retweets toward things like “reach” and “engagement”, which is great. But we also started to talk about more demographic aspects of engagement – the “who” of who is following you, and why. These are much harder things to measure easily, particularly for a large number of followers, but they also seem really important for beginning to understand the more engaged segments of online communities.
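Just to make the "beyond raw counts" idea concrete, here's a rough sketch of my own (the numbers and field names are entirely made up, not anything from the session) of how per-post counts might get rolled up into simple reach and engagement-rate figures:

# Rough sketch: rolling raw social media counts into reach/engagement figures.
# All field names and numbers below are hypothetical, for illustration only.
posts = [
    {"impressions": 12000, "likes": 45, "retweets": 12, "replies": 6},
    {"impressions": 8000, "likes": 20, "retweets": 3, "replies": 1},
    {"impressions": 30000, "likes": 150, "retweets": 60, "replies": 25},
]

total_reach = sum(p["impressions"] for p in posts)
total_interactions = sum(p["likes"] + p["retweets"] + p["replies"] for p in posts)

# Engagement rate: interactions per impression, a step beyond follower counts.
engagement_rate = total_interactions / total_reach

print(f"Reach: {total_reach}, engagement rate: {engagement_rate:.2%}")

Even a simple ratio like that says more than a follower count, but it still tells you nothing about who those engaged people are - which was the harder question in the room.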

For example, the workshop organizers, who work on a US government HIV/AIDS website, talked a bit about looking in more detail at some of their more “unique” followers, trying to understand how people who may not be as like-minded end up interacting with their program online. Obviously, this requires understanding your community in a way that is hard with a small staff or program, but the discussion that ensued was interesting.

Social Network Analysis 

I also attended many sessions on social network analysis. One of the more interesting presentations was from a researcher who had done a network analysis during a meeting intended to foster collaboration between researchers and practitioners. As an intervention, they presented the results of the analysis to the participants halfway through, showing that although the two groups were interacting to some degree, it was not nearly as much as they thought. Seeing their interactions laid out this way helped the participants focus on being more inclusive, hopefully leading to more fruitful collaboration.
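To give a sense of what that kind of mid-meeting check might boil down to, here's a tiny sketch of my own - the names, interaction data, and the use of the networkx library are all my assumptions, not the presenter's actual method - tallying within-group versus cross-group ties:

# Hypothetical sketch: within-group vs. cross-group ties at a mixed meeting.
# Participants, roles, and "conversation" edges are all made up.
import networkx as nx

roles = {"Ana": "researcher", "Ben": "researcher",
         "Cam": "practitioner", "Dee": "practitioner"}

G = nx.Graph()
G.add_nodes_from(roles)
# Each edge represents a reported conversation in the first half of the meeting.
G.add_edges_from([("Ana", "Ben"), ("Cam", "Dee"), ("Ana", "Cam")])

within = sum(1 for u, v in G.edges if roles[u] == roles[v])
between = G.number_of_edges() - within
print(f"Within-group ties: {within}, cross-group ties: {between}")

Showing participants a tally (or a network diagram) like this partway through is the "intervention" part - the measurement itself nudges the behavior it is measuring.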

Finally, I participated in a one-day workshop on listening skills for evaluators (particularly in interview contexts) that was really interesting and worthy of its own post, which will come soon.
