Blogs
Hi! I’m Michaele Webb and I am a PhD student at Syracuse University. My research interests include rural education and conducting evaluations in rural areas.
Hot Tips:
Follow the lead of individuals in the program you are evaluating.
These individuals are familiar with the everyday life of the program; they have first-hand knowledge of what is and isn’t working. They have developed an understanding of the program and client cultures. They can provide information regarding what is and is not acceptable in the program context.
Just because you have conducted an evaluation for a particular group does not mean that you can run every evaluation you conduct with that group in the same way. While this may seem straightforward, it is something I sometimes overlook. In my research on rural programs, I have learned that what rural looks like in one area may be very different from what rural looks like in another. For example, in rural Alaska evaluators may travel by plane to reach their population, while in rural Louisiana, they might travel by boat. Also, while some rural areas have very diverse populations, others don't. So, learn from evaluations you have conducted, but do not try to replicate them with a new population or environment.
Cultural Competence isn’t something you learn from a textbook.
During my time as a PhD student, I have learned that no matter how much time I spend reading about the population I am working with, the most important thing that I can do is to get out and talk with them first hand.
Lesson Learned: Sometimes even the most rigorous evaluation won’t help the population if you do not use culturally competent evaluation practices.
If you do not keep the culture of the group you are working with in mind, the evaluation results might not be valid because they do not accurately assess what is occurring within that particular group.
Evaluators need to be aware of the norms of the particular group they are working with. If an evaluation violates the norms, the individuals may be quick to dismiss the evaluation results.
Culture can impact how individuals access information. If you are not aware of how information is spread within the community you are working with, you might not get the information to all the people who need it. Also, you may present it in a way that makes it difficult for them to understand.
Rad Resources:
The AEA statement on Cultural Competence In Evaluation
The Centers for Disease Control and Prevention Practical Strategies for Culturally Competent Evaluation: Evaluation Guide
Cultural Competence in Evaluation: An Overview, by Saumitra SenGupta, Rodney Hopson, and Melva Thompson-Robinson, in New Directions for Evaluation, 102, 2004.
The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SWTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Cultural Competence Week: Dominica McBride on AEA 2013 and Spreading the Word on Cultural Competence
CC Week: Cindy Crusto on Introduction to AEA Public Statement on Cultural Competence in Evaluation Week
Cultural Competence Week: Melanie Hwalek on the Adoption of the AEA Public Statement on Cultural Competence in Evaluation - Moving From Policy to Practice and Practice to Policy
Hello! I am Kathy Bolland, and I serve as the assessment coordinator in a school of social work. My educational and experiential background in research and evaluation helped to prepare me for this responsibility. I am a past AEA treasurer, past chair of the Teaching of Evaluation Topical Interest Group (TIG), and current co-chair of the Social Work Topical Interest Group. I also manage our AEA electronic discussion venue, EVALTALK.
Lesson Learned: Although many professional schools have been assessing student learning outcomes for several years, as part of their disciplinary accreditation requirements, many divisions in the arts and sciences have not. Although not all faculty and administrators in professional schools approve of formal attempts to assess student learning outcomes as a means of informing program-level improvements, at least they are used to the idea. Their experiences can help their colleagues in other disciplines see that such assessment need not be so threatening—especially if they jump in and take a leading role.
Lesson Learned: Evaluators, even evaluators with primary roles in higher education, may not immediately notice that assessment of student learning outcomes bears many similarities to evaluation. People focused on assessment of learning outcomes, however, may be narrowly focused on whether stated student learning outcomes were achieved, not realizing that it is also important to examine the provenance of those outcomes, the implicit and explicit values embodied in those outcomes, and the consequences of assessing the outcomes. When evaluators become involved in assessing student learning outcomes, they can help to broaden the program improvement efforts to focus on stakeholder involvement in identifying appropriate student learning outcomes, on social and educational values, and on both intended and unintended consequences of higher learning and its assessment.
Hot Tip: Faculty from professional schools, such as social work, may have experiences in assessing student learning outcomes that can be helpful in regional accreditation efforts.
Hot Tip: Assessment councils and committees focused on disciplinary or regional accreditation may welcome evaluators into their fold! Evaluators may find that their measurement skills are appreciated before their broader perspectives. Take it slow!
Rad Resources: Ideas and methods discussed in the American Journal of Evaluation, New Directions for Evaluation, Evaluation and Program Planning, and other evaluation-focused journals have much to offer individuals focused on assessing student learning outcomes to inform program improvement (and accreditation).
The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Lisa Bertrand on External Evaluation: Preparing for the Process of Program Accreditation
Climate Ed Eval week; Rachel Becker-Klein on Using Embedded Assessment Tools for Evaluating Impact of Climate Change Education Programs on Youths
Michelle Baron on Building a Culture of Assessment
Dear Evaluators, will you be my valentine? Sheila B Robinson here, aea365’s Lead Curator and Saturday contributor feeling a bit mushy today.
On holidays, we are reminded of the way we feel about life, ourselves, and each other. We express thanks on Thanksgiving, appreciation for military personnel on Memorial Day, and hope for the future on New Year’s Day. And of course, today is a day we traditionally express love for one another.
Lesson Learned: I love evaluation work and evaluators, but when I hear myself say that, I wonder why. With my first career - teaching - the reason for my passion was clear. I love people. I love teaching and learning. I love working on teams, taking pleasure in our shared passion for the work, and admiring colleagues who share so generously their time, attention, and knowledge. I love rooting for the underdog, helping build up those who are down, and finding tremendous joy in watching them succeed.
But why evaluation? Sure, I love collecting and analyzing data as much as the next evaluator, but is that really it?
I offer two brief anecdotes as illustration:
1. For a graduate school course on qualitative research, I studied a small local music store and its owner. His passion was singular, his work ethic admirable, and his mantra was "it’s ALL about the music." Despite all, business was not booming. I wondered what made the operation tick, so to speak, as it had been barely surviving (from a fiscal perspective) for decades, yet wildly popular with its cult following. My findings? It was not "all about the music." It was about the people, their shared passion and connections, and social relationships. Music was just the raw material.
2. I started a conversation about cars with the owner of a very successful automotive business and received a tepid response to my excitement about a particular model. I asked, "so then, what cars do you love?" His response: "I don’t really love cars. I’m a businessman. I love running a business. Cars are just the vehicle - uh, no pun intended."
So why evaluation? Same reasons as above. It’s all there in evaluation work as well, as I’m certain it is in law, medicine, or business too.
Rad Resource: YOU. Inspiration for this post came from an interview I granted to grad students from my university. They mentioned they had also interviewed two of the biggest names in evaluation. The students then asked me why, with all the blogs, free resources, accessibility and approachability of the "rock stars," evaluators seem to be so generous with their time, knowledge, and intellectual property when that doesn’t appear to be the case in other disciplines. I think I have the answer.
LOVE.
Image credit: seaside rose garden via Flickr
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Teaching Tips Week: Nick Fuhrman on Using Graduate Students as Evaluation Consultants
Brian Silvey on Grade Points and Evaluation
Corinne Singleton, Linda Shear, & Savitha Moorthy on Innovative Teaching & Learning Research
Hi all, we’re blogging today from the National Resource Center on Domestic Violence. Cris Sullivan is NRCDV’s Senior Research Advisor, and Annika Gifford is Senior Director of Policy and Research. Together with CEO Anne Menard, one of our projects has focused on helping domestic violence organizations evaluate how their services impact domestic violence survivors and their children.
Domestic violence (DV) programs have been undergoing scrutiny to demonstrate that they are making a significant difference in the lives of those using their services. Increasingly, funders are expecting them to demonstrate that their efforts are resulting in positive outcomes for survivors.
In addition to the issues facing all nonprofits trying to evaluate their impact (e.g., little to no money, time or expertise), DV programs have the following additional factors to consider:
They are often working with people in crisis who may not be in a space to engage in program evaluation.
They have to consider safety and confidentiality of the people with whom they work (so, for example, cannot contact people later through mail).
Some funders expect DV programs to have unrealistic or even victim-blaming outcomes (e.g., "victims will leave the relationship").
DV programs recognize that each survivor seeking help has their own individual needs, life experiences, and concerns. Services are tailored to each person, making program evaluation that much more difficult.
Rad Resource: To help domestic violence programs evaluate their work on their own terms — and with no extra money or time — we have created an online resource center that houses a great deal of free and accessible resources.
Among other things, The DV Evidence Project houses a theory of change that programs can use to demonstrate the process through which their services result in long-term benefits for survivors and their children. The site also provides brief summaries of the evidence behind shelters, advocacy, support groups and counseling (demonstrating that programs are engaged in "evidence-based practice"). Finally, evaluation tools are provided so that programs don’t need to re-invent the wheel. These evaluation tools include client surveys, tips for engaging staff in evaluation, strategies for gathering the data in sensitive ways, and protocols for interpreting and using the findings. We hope these resources are helpful to those in the field doing this incredibly important work!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
CP TIG Week: Alexis V. Marbach on Evaluability Assessment Used in Domestic Violence Programs
CP TIG Week: Helen Singer, Sally Thigpen, and Natalie Wilkins on Understanding Evidence: CDC’s Interactive Tool to Support Evidence-Based Decision Making
MIE TIG Week: Ray Kennard Haynes on the Use of Domestic Diversity Evaluation Checklists in Higher Education
Hi, I’m Jen Przewoznik, Director of Prevention and Evaluation at the North Carolina Coalition Against Sexual Assault. I have been working with and within lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) communities for 15 years. I’d like to share some thoughts about conducting research with and within LGBTQI+ communities that I have learned, using as an example a current study I am co-investigating.
Research with and within LGBTQI+ communities has happened for decades. More and more of this research is conducted by people who are well trained in data collection and analysis regarding people who claim non-normative sexual and gender identities. Unfortunately, a lot of this research still misses the mark. Some researchers, agenda-driven, "miss the mark" because they are actively trying to defame LGBTQI+ people. Most studies, however, seem to miss the mark due to fundamental design flaws. There are still measurement tools being created (maybe right now?!?! Let’s hope not right now) that conflate sexual orientation and gender identity.
Hot Tip: Friends don’t let friends conflate sexual orientation and gender identity. I know you wouldn’t do this, but if you see a researcher doing this, please tell them to stop.
Hot Tip: Engage BOTH LGBTQI+ people and researchers in the process of creating instruments to better understand LGBTQI+ lives and experiences. Juliette Grimmett, NC Sexual Violence Prevention Team member, and I are collaborating with Drs. Paige Hall Smith and Leanne Royster of UNC Greensboro on a study about LGBTQI+ peoples' experiences with sexual violence on NC college campuses. The results will help campuses create inclusive and affirming sexual violence prevention programming. We began by holding a daylong semi-structured qualitative discussion group to engage folks in conversations about sexual violence and LGBTQI+ communities. People were chosen for their experience in sexual violence or LGBTQI+ campus work, with an emphasis on inviting people we knew to be allies and/or themselves LGBTQI+-identified.
Lessons Learned: The output from the meeting heavily informed the survey, which includes questions about sexual violence without using normative terms for body parts and allows participants to choose "all that apply" for identity questions. Our colleagues reminded us that this work can’t be as neat and tidy as it sometimes seems researchers and statisticians would like.
When we exclude necessary research elements because we do not have the knowledge or are too concerned with whether the data will be publishable (statistical significance, the enemy of robust LGBTQI+ research. Kidding. Sort of.), we are left with results that are largely unreliable. While this shouldn’t hold us back from doing this work, it is incredibly important that we continue to explore ways to ask difficult questions and analyze complex responses in order to truly understand peoples’ lived experiences.
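One concrete analysis wrinkle that comes with "all that apply" identity questions is that respondents no longer fit one category each. As a purely illustrative aside (this is not the study's instrument or code; the column names and responses below are hypothetical), a minimal pandas sketch of tallying multi-select answers without collapsing people into single categories might look like this:

```python
# Minimal sketch (not the study's actual instrument or code): tallying
# "select all that apply" identity questions without forcing each person
# into a single category. Column names and data are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    # Each cell holds every option a respondent selected, semicolon-separated
    "sexual_orientation": ["queer;bisexual", "gay", "lesbian;queer", "bisexual"],
    "gender_identity": ["transgender;nonbinary", "cisgender man",
                        "cisgender woman", "nonbinary"],
})

def tally_multiselect(series: pd.Series) -> pd.Series:
    """Count each selected option across all respondents.

    Counts sum to more than the number of respondents by design,
    because people may (and do) claim more than one identity.
    """
    return (
        series.str.split(";")    # split each cell into its selected options
              .explode()          # one row per selected option
              .str.strip()
              .value_counts()
    )

print(tally_multiselect(responses["sexual_orientation"]))
print(tally_multiselect(responses["gender_identity"]))
```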
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Best of aea365 week: Käri Greene on Issues of Gender and Sexuality in Evaluation
LGBT Week: Käri Greene on Issues of Gender and Sexuality in Evaluation
Liz Zadnik on Bringing passion and enthusiasm to program evaluation
Hi there! We’re Anjie Rosga, Director, and Natalie Blackmur, Communications Coordinator, at Informing Change, a strategic consulting firm dedicated to increasing effectiveness and impact in the nonprofit and philanthropic sectors. In working with clients large and small, we’ve found that organizations are in a better position to learn if they take the time to prepare and build their capacity to evaluate. To facilitate this process, Informing Change developed the Evaluation Capacity Diagnostic Tool to measure an organization’s readiness to take on evaluation.
Rad Resource: The extent to which evaluation translates into continuous learning depends in large part on the organizational culture and the level of evaluation experience among staff. These are the two primary categories—themselves divided into six smaller areas of capacity—in the Evaluation Capacity Diagnostic Tool. The tool is a self-assessment survey that organizations can use on their own, in preparation for working with an external evaluator, or alongside an external evaluator. A lower score indicates that an organization should, for example, focus on developing outcomes and indicators, track a few key measures, or develop simple data collection forms to use over time. The higher the score, the higher the evaluation capacity; staff may then be able to collect more types and a greater volume of data, design more sophisticated assessments, and integrate and commit to making changes based on lessons learned.
However, there’s more to the Evaluation Capacity Diagnostic Tool than the summary score. It is a powerful way to catalyze a collective discussion and raise awareness about evaluation. Taking stock and sharing individuals’ perceptions of their organization’s capacity can jumpstart the process of building a culture that’s ready to evaluate and implement learnings.
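For readers who like to see the mechanics, here is a purely illustrative sketch of the general scoring idea: item ratings rolled up into area sub-scores and an overall readiness score. The capacity areas, items, and cut-point below are invented for illustration; they are not the Evaluation Capacity Diagnostic Tool's actual content or scoring rules.

```python
# Illustrative sketch only: the Evaluation Capacity Diagnostic Tool is
# Informing Change's instrument, and its actual items, areas, and scoring
# rules are not reproduced here. This just shows the general idea of rolling
# item ratings up into area sub-scores and an overall readiness score.
from statistics import mean

# Hypothetical 1-5 self-ratings grouped into capacity areas (names invented)
ratings = {
    "leadership_support": [4, 3, 4],
    "learning_culture":   [3, 3, 2],
    "staff_experience":   [2, 2, 3],
    "data_systems":       [2, 1, 2],
}

area_scores = {area: mean(items) for area, items in ratings.items()}
overall = mean(area_scores.values())

for area, score in area_scores.items():
    print(f"{area:20s} {score:.1f}")
print(f"{'overall':20s} {overall:.1f}")

# A lower overall score suggests starting small (a few key measures, simple
# forms); a higher score suggests readiness for more ambitious designs.
# The 3.0 cut-point is arbitrary and only for illustration.
if overall < 3:
    print("Focus on basics: outcomes, indicators, simple data collection forms.")
else:
    print("Ready for more data types, greater volume, and more sophisticated designs.")
```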
Hot Tip: Make sure everyone is on the same page. Especially if an organization is inexperienced in evaluation, it’s important to discuss the vocabulary in the Tool and how it compares with individuals’ own definitions.
Hot Tip: Assessing evaluation capacity can be a tough sell. Organizations come to us because they’ve made the decision to begin evaluation, but gauging their capacity to do so can feel like a setback. To get organizations on board, we frame evaluation capacity as an investment in building a learning culture and the infrastructure that can make the most of even relatively limited data collection efforts.
We love to hear from folks who have implemented or reviewed the tool! Feel free to reach out to us at news@informingchange.com.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Michelle Baron on Building a Culture of Assessment
IE Week: Andrew Anderson on The Role of InternalEvaluators inSupporting the Implementation of Evaluation Recommendations
CP TIG Week: Wendi L. Siebold on Where The Rubber Meets The Road: Taking A Hard Look At Organizational Capacity And The Use Of Empowerment Evaluation
My name is Ann Zukoski and I am a Senior Research Associate at Rainbow Research, Inc. in Minneapolis, Minnesota.
Founded as a nonprofit organization in 1974, Rainbow Research’s mission is to improve the effectiveness of socially-concerned organizations through capacity building, research, and evaluation. Rainbow Research is known for its focus on using participatory evaluation approaches.
Through my work, I am always looking for new tools and approaches to engage stakeholders throughout the evaluation process. Today, I am sharing two methods that I have found helpful.
Rad Resource:
Ripple Effect Mapping (REM) is an effective method for having a large group of stakeholders identify the intended and unintended impacts of projects. In REM, stakeholders use elements of Appreciative Inquiry, mind mapping, and qualitative data analysis to reflect upon and visually map the intended and unintended changes produced by a complex program or collaboration. It is a powerful technique to document impacts and engage stakeholders. Rainbow Research is currently collaborating with Scott Chazdon at the University of Minnesota to use this method to evaluate a community health program's impact by conducting REM at two points in time: at the beginning and end of a three-year project. Want to learn more? See http://evaluation.umn.edu/wp-content/uploads/Ripple-Effect-Mapping-MESI13-spring-training-march-2013_KA-20130305.pdf
Hot Tip:
The Art of Hosting (AoH) is a set of facilitation tools evaluators can use to engage stakeholders and create discussions that count. AoH comprises methods for working with groups of any size to harness their collective wisdom and self-organizing capacity. The Art of Hosting uses a set of conversational processes to invite people to step in and become fully engaged in the task at hand. This working practice can help groups make decisions, build their capacity, and find new ways to respond to opportunities, challenges, and change. For more information, see http://www.artofhosting.org/what-is-aoh/
Have you used these tools? Let us all know your thoughts!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Ann Zukoski on Participatory Evaluation Approaches
Best of aea365 week: Ann Zukoski on Participatory Evaluation Approaches
Kathy Muhr, Aniko Laszlo and Alexis Henry on Using Concept Mapping to Evaluate Employment Collaboratives for People with Disabilities.
Hi everyone, I am Jyoti Venketraman, Director of Special Initiatives at the New Jersey Coalition Against Sexual Assault. I was initially hesitant to blog because I don't consider myself an expert. But my awesome colleague Liz Zadnik, aea365 Outreach Coordinator, and a recent blog post by Sheila B Robinson, aea365's Lead Curator, made me realize I don't have to be an expert to contribute and can share my individual lessons learnt! So in that spirit: a project I am currently undertaking involves collaborating with diverse communities. Evaluation plays a major role as it helps us answer two important questions: Are we making a difference? Are we good stewards of the resources we are using? Below are a few crumbs of knowledge I have learnt in my evaluation journey so far.
Lesson Learned: Communities and individuals value different things from a project or intervention. I learnt this early in my career as an evaluation newbie. I find that when evaluation tools factor in differing stakeholder perceptions on what constitutes a "success," you get a more holistic picture of what the actual impact of a specific project is within that community. This may run counter to stated project objectives but with well-planned and thoughtful stakeholder involvement, you can ensure you capture differing perceptions of "success."
Lesson Learned: History matters. Historical context, historical trauma, and the trajectory of development a community takes can all be critical variables. Some community members may be more aware of it than others. I have learnt that as evaluators we have to be open and intentional in affirming and acknowledging this in our practice.
Lesson Learned: Be open to a variety of data collection methods. One of the reasons I like storytelling is that it accommodates diverse views, provides a richer context, and gives a window into how communities view the "success or impact" of a specific project.
Rad Resource: Many of my resources come from this blog or from what I have collected in my journey so far. On cultural humility I like Cultural Humility: People, Principles & Practices by Vivian Chavez (2012)
Rad Resource: On context in evaluation I like Participatory research for sustainable livelihoods from International Institute for Sustainable Development
Rad Resource: On storytelling I like CDC’s resource on Telling Your Program’s story, The California Endowment‘s resource on Storytelling Approaches to Program Evaluation and the Center for Digital Storytelling .
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Sharon Smith-Halstead on Storytelling in Evaluation
René Lavinghouze on Sharing Your Program’s Story
AKEN Week: Amelia Ruerup on Understanding Indigenous Evaluation in an Alaskan Context
"Creativity is intelligence having fun." - Albert Einstein
Greetings! I’m Sara Vaca, independent consultant (EvalQuality.com) and recently appointed Creative Advisor of this blog. To start contributing I thought of writing some posts about how creativity intertwines with evaluation. This is Part II of a two-part post. You can find Part I here.
Lesson Learned: Evaluation is a rigorous, systematic transdiscipline. However, evaluators can (and already do) use creativity to improve their practice at many different moments and levels.
Here are many examples, drawn just from digging into the aea365 archives:
Hot Tips:
1. Making the most of new available tools
Here are some clever examples:
Susan Kistler on Padlet: A Free Virtual Bulletin Board and Brainstorming Tool
Miki Tsukamoto on Using Video as a Tool to Capture Baseline Surveys
Sarah Rand, Amy Cassata, Maurice Samuels and Sandra Holt on iPad Survey Development for Young Learners
David Fetterman on Google Glass Part II: Using Glass as an Evaluation Tool
Jessica Manta-Meyer, Jocelyn Atkins, and Saili Willis on Creative Ways to Solicit Youth Feedback
Cindy Banyai on Creative Tech Tools for Participatory Evaluation
2. Disseminating results
We have plenty of examples within the Data Visualization and Reporting TIG. Here are some:
Megan Greeson and Adrienne Adams on Multimedia Reports
Susan Kistler on Innovative Reporting Part III: Taking It to the Streets
Elissa Schloesser on How to Make Your Digital PDF Report Interactive and Accessible
Susan Kistler on a Free Tool for Adding Interactivity to Online Reports: Innovative Reporting Part IV
Kat Athanasiades on Get Graphic for Better Conversation Facilitation: Graphic Recording at Evaluation 2013
Rakesh Mohan, Lance McCleve, Tony Grange, Bryon Welch, and Margaret Campbell on Sankey Diagrams: A Cool Tool for Explaining the Complex Flow of Resources in Large Organizations
3. Learning about Evaluation
When it comes to our own learning, there is also room for new things. Here some ideas:
Jayne Corso on Why Blogging is so Important
Bloggers Series: Chris Lysy on Fresh Spectrum
Petra Chambers-Sinclair on Biohacking: a New Hobby for Your Evaluative Mindset
We would love to hear how YOU are using creativity in your evaluation work.
Please consider contributing your own aea365 post! (sara.vaca@EvalQuality.com)
More about creativity and evaluation coming soon!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Sara Vaca on Creativity and Evaluation Part I
Sheila B Robinson on Introducing our Newest aea365 Team Members
DVR Week: Ann Emery on the Dataviz Hall of Fame for Evaluation
Hi again - Leslie Goodyear, Jennifer Jewiss, Janet Usinger, and Eric Barela, the co-leaders of the AEA Qualitative Methods TIG, back with another lesson we learned as we co-edited a book that explores how qualitative inquiry and evaluation fit together. Our last blog focused on the five elements of quality in qualitative evaluation. Underpinning these five elements is a deep understanding and consideration of context.
Lesson Learned: Context includes the setting, program history, and programmatic values and goals. It also includes the personalities of and relationships among the key stakeholders, along with the cultures in which they operate. In their chapter on competencies for qualitative evaluators, Stevahn and King describe this understanding as a sixth sense.
Lesson Learned: Understanding context was one of the driving forces in the early adoption of qualitative inquiry in evaluation. In their book chapter, Schwandt and Cash discuss how the need to explain outcomes - and therefore better understand program complexities and the experiences of participants - drove evaluators to employ qualitative inquiry in their evaluations.
Lesson Learned: Understanding context is not always highlighted in descriptions of high quality evaluations, perhaps because it is a basic assumption of effective evaluators who use qualitative inquiry in their practice.
Rad Resource: Further discussion about the importance of understanding context appears in several chapters of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass), an edited volume featuring many of our field’s experts on qualitative evaluation.
The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
QUAL Eval Week: Leslie Goodyear, Jennifer Jewiss, Janet Usinger and Eric Barela on Qualitative Inquiry in Evaluation
QUAL Eval Week: Eric Barela on providing a detailed description of qualitative inquiry choices and processes to clients
QUAL Eval Week: Leslie Goodyear on The Importance of Asking "Stupid Questions" in Qualitative Evaluation
My name is Michael Quinn Patton and I am an independent evaluation consultant. Development of more-nuanced and targeted purposeful sampling strategies has increased the utility of qualitative evaluation methods over the last decade. In the end, whatever conclusions we draw and judgments we make depend on what we have sampled.
Hot Tip: Make your qualitative sampling strategic and purposeful — the criteria of qualitative excellence.
Hot Tip: Convenience sampling is neither purposeful nor strategic. Convenience sampling means interviewees are selected because they happen to be available, for example, whoever happens to be around a program during a site visit. While convenience and cost are real considerations, first priority goes to strategically designing the sample to get the most information of greatest utility from the limited number of cases selected.
Hot Tip: Language matters. Both terms, purposeful and purposive, describe qualitative sampling. My work involves collaborating with non-researchers who say they find the term purposive academic, off-putting, and unclear. So stay purposeful.
Hot Tip: Be strategically purposeful. Some label qualitative case selection "nonprobability sampling" making explicit the contrast to probability sampling. This defines qualitative sampling by what it is not (nonprobability) rather than by what it is (strategically purposeful).
Hot Tip: A purposefully selected rose is still a rose. Because the word "sampling" is associated in many people’s minds with random probability sampling (generalizing from a sample to a population), some prefer to avoid the word sampling altogether in qualitative evaluations and simply refer to case selection. As always in evaluation, use terminology and nomenclature that is understandable and meaningful to primary intended users contextually.
Hot Tip: Watch for and resist denigration of purposeful sampling. One international agency stipulates that purposeful samples can only be used for learning, not for accountability or public reporting on evaluation of public sector operations. Only randomly chosen representative samples are considered credible. This narrow view of purposeful sampling limits the potential contributions of strategically selected purposeful samples.
Cool Trick: Learn purposeful sampling options. Forty options (Patton, 2015, pp. 266-272) mean there is a sampling strategy for every evaluation purpose.
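As one illustration of what a purposeful strategy can look like in practice, consider maximum variation (heterogeneity) sampling: deliberately selecting a small set of cases that differ as much as possible on dimensions that matter. The toy sketch below is my own illustration rather than Patton's procedure, the sites and dimensions are invented, and real case selection involves far more judgment than a script.

```python
# Illustrative sketch, not Patton's procedure: maximum variation
# (heterogeneity) sampling picks a small set of cases that differ as much
# as possible on dimensions that matter. Site data and dimensions are
# hypothetical.
import itertools
import pandas as pd

sites = pd.DataFrame({
    "site":   ["A", "B", "C", "D", "E", "F"],
    "size":   ["small", "large", "small", "medium", "large", "medium"],
    "region": ["rural", "urban", "urban", "rural", "suburban", "urban"],
    "age":    ["new", "mature", "mature", "new", "mature", "new"],
})

def variation(combo: pd.DataFrame) -> int:
    """Score a candidate sample by how many distinct values it covers."""
    return sum(combo[col].nunique() for col in ["size", "region", "age"])

# Exhaustively score every 3-site sample and keep the most heterogeneous one
best = max(
    (sites.loc[list(idx)] for idx in itertools.combinations(sites.index, 3)),
    key=variation,
)
print(best)
```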
Lesson Learned: Be strategic and purposeful in all aspects of evaluation design, including especially qualitative case selection.
Rad Resources:
Patton, M.Q. (2014). Qualitative inquiry in utilization-focused evaluation. In Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.), Qualitative inquiry in evaluation: From theory to practice. Jossey-Bass, pp. 25-54.
Patton, M.Q. (2015). Qualitative Research and Evaluation Methods, 4th ed. Sage Publications.
Patton, M.Q. (2014) Top 10 Developments in Qualitative Evaluation for the Last Decade.
The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
QUAL Eval Week: Michael Quinn Patton on Qualitative Inquiry in Utilization-Focused Evaluation
Nicole Vicinanza on Explaining Random Sampling to Stakeholders
Best of aea365 week: Nicole Vicinanza on Explaining Random Sampling to Stakeholders
My name is Awab and I am working as Monitoring & Evaluation Specialist with the Higher Education Commission (HEC), Islamabad.
Gauging what participants learn from a training is always a challenge. Recently, we faced this challenge when the HEC conducted training for about 1,600 top managers of Pakistani universities. The trainings were delivered through implementation partners (IPs). We asked the IPs to conduct pre- and post-training tests so that we would know how much the participants learned from these trainings. The IPs conducted the pre- and post-tests, analyzed the data, and reported the difference between pre-test and post-test scores. Since post-test scores are always higher than pre-test scores (in some of our cases, by more than 100%), the analysis painted a rosy picture of the trainings and everything looked fine (as shown in Figure 1).
Figure 1: Comparison of Pre & Post-tests, shared by one of the IPs.
When the training reports were passed on to the M&E Unit, we rejected this analysis because it did not give us enough information to judge the quality of the trainings or to plan for the future.
Hot Tips: We started by asking the right questions. We told the IPs that, from the pre- and post-test analyses, we were interested in the answers to three questions: (i) What was the pre-existing learning level of the participants? (ii) What is the net learning attributable to the training? (iii) What learning gap do we need to bridge in future trainings?
Cool Tricks: The three questions could be answered by analyzing the pre- and post-test scores in a very simple way and presenting the data in a stacked bar chart. We developed a model for the analysis and shared it with the IPs. The results were surprisingly interesting. The model gave a clear picture of the pre-existing learning, the net learning, and the learning gap. Thus, we were able not only to credit the IPs for the net learning attributable to them, but also to hold them accountable for the learning gap and to plan for future trainings.
Figure 2: Learning-based Model of Pre & Post-tests analysis.
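The decomposition behind Figure 2 is simple arithmetic. Here is a minimal sketch of that kind of analysis, assuming scores are expressed as percentages of a 100-point maximum; the participant numbers below are invented, and this is not the HEC Excel workbook itself.

```python
# Minimal sketch of the learning-based decomposition described above,
# assuming pre- and post-test scores are percentages of a 100-point maximum.
# Participant scores are made up. Decomposition:
#   pre-existing learning = pre
#   net learning          = post - pre
#   learning gap          = 100 - post
import pandas as pd
import matplotlib.pyplot as plt

scores = pd.DataFrame({
    "participant": ["P1", "P2", "P3", "P4"],
    "pre":  [35, 50, 20, 60],
    "post": [70, 85, 55, 90],
})

scores["pre_existing_learning"] = scores["pre"]
scores["net_learning"] = scores["post"] - scores["pre"]
scores["learning_gap"] = 100 - scores["post"]
print(scores)

# Stacked bar chart: the three components of each participant's 100%
ax = scores.set_index("participant")[
    ["pre_existing_learning", "net_learning", "learning_gap"]
].plot(kind="bar", stacked=True)
ax.set_ylabel("Percent of maximum score")
ax.set_title("Pre-existing learning, net learning, and learning gap")
plt.tight_layout()
plt.show()
```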
Lessons Learned:
In evaluations, it is always good to ask yourself how you are going to use the data. Asking the right questions is half the solution.
For further details on how to gauge learning from a training, and to download the Excel sheets for data analysis using this model, see the following links:
https://www.scribd.com/doc/269185779/The-Level-2-of-Training-Evaluation
https://www.scribd.com/doc/269198539/Level-2-Evaluation-of-Training-LBAT-Model-Case-Study-Workbook
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Make Me Smarter
E-Nor Helps Marketers Navigate Seas of Data
By Michelle Wallace
Dec 3, 2013
E-Nor is a digital analytics and marketing agency that has provided consulting to an impressive list of companies including VMware, ADP, Sony, and MIT. Originally founded in 2003, the agency works hard to help clients navigate what Feras Alhlou—E-Nor's Co-Founder and Principal Consultant—calls the "ocean of data" now available to businesses of all sizes.
"You don't have to be a Fortune 100 today to be overwhelmed with data," says Alhlou, in this video.
Make Me Smarter
Data-Driven Teamwork Drives Millions in Revenue for Insurance Leader SpareBank 1
By Michelle Wallace
Dec 4, 2013
SpareBank 1 Forsikring, a Norwegian insurance leader, knows—as the adage goes—that talent wins games but teamwork wins championships. The $114 billion group is actually a collective of smaller savings banks that together provide service across the regions of Norway.
And since it's crucial to understand their myriad customers across the country, SpareBank 1 staff encounter lots of data. This customer story explains why the company predicts that Tableau will help them bring in an extra $40 million in just three years.
Make Me Smarter
Lightning Round Tips and Tricks - Tableau 8.2 Edition
By Ross Perez
Jul 15, 2014
Tableau 8.2 contains two important new features that will make your work with Tableau easier and more effective. Watch the video to see how:
Entertain Me
July Viz Roundup
By Ross Perez
Jul 16, 2014
One of the things we like the most about our customers is that they share a key passion of ours: telling stories with data. This passion is also shared by the thousands of Tableau Public authors that publish data stories to the web every day. The best place to see these stories is to subscribe to Viz of the Day; today we’ve taken a selection of the best data stories over the past several months to share with you.
Make Me Smarter
Tips and Tricks From TC14
By Ross Perez
Sep 16, 2014
Some of the more interesting and practical sessions at TC14 were the Rapid Fire Tips and Tricks sessions led by Daniel Hom. Don't worry if you missed them: we're bringing two of the most interesting and useful tricks straight to you today.
Entertain Me
October Viz Round-Up
By Ross Perez
Oct 1, 2014
It’s time for the Viz Roundup! Every several months we look back on Viz of the Day and find visualizations that were especially meaningful, important or influential to share with you. This Roundup is particularly special because the majority of our featured vizzes have taken advantage of the new Story Points feature recently released in Tableau 8.2, which makes it easier to tell a narrative story with data. Take a look for yourself.
Make Me Smarter
Extending Enterprise Security with Kerberos Support: Now in Beta!
By Neelesh Kamkolkar
Oct 20, 2014
Kerberos is coming to Tableau very soon. Many of you have asked for Kerberos support to provide single sign-on from the client all the way to the database. Well, I’m excited to announce the beta of Tableau 8.3, which delivers support for Kerberos for Microsoft SQL Server, Microsoft Analysis Server, and Cloudera Impala.
Tableau already supports enterprise class security and authentication mechanisms like integration with Active Directory and Identity Management providers with SAML. In addition, Tableau Server supports native authentication for smaller teams that want to use Server out of the box. With the release of 8.3, we are extending that flexibility further to include support for Kerberos.
Keep Me in the Loop
Announcing Tableau 8.3 Kerberos Support
By Neelesh Kamkolkar
Dec 1, 2014
Product Manager Neelesh Kamkolkar announces the launch of Tableau 8.3 which extends Enterprise Security with Support for Kerberos.
Make Me Smarter
LRTT: Optimizing Difficult Data for Shared Analysis
By Ross Perez
Dec 2, 2014
Customers will often tell us that they like using Tableau to drag and drop and create visualizations, but their database is difficult and messy. It could be slow, hard to understand, or challenging to use, but whatever the reason, the people who need to analyze data don't feel comfortable connecting to the database and using the data within it.
Entertain Me
December Viz Roundup
By Lucas Stewart
Dec 2, 2014
It’s time for the Viz Roundup! Every so often we look back fondly on our Viz of the Day feed and find vizzes that were especially meaningful, important, or entertaining to share with you. The new Story Points feature recently released with Tableau 8.2 has made it much easier to narrate events with data. Feast your eyes on these impressive ways members of our community have used data to tell a story.
Make Me Smarter
Tableau 9.0 Preview: Stay in the flow with Auto Data Prep
By Marc Rueter
Jan 22, 2015
Analytics isn't just for pretty data. Tableau 9.0, currently in beta, automates cleaning up messy data, especially Excel spreadsheets. This includes the Tableau Data Interpreter to automatically identify the structure of an Excel file, new tools to pivot and split data, and a new layout to quickly operate on metadata. Together with the Automatic Data Modeling that was released in 8.2, these new features help you quickly get your data ready for analysis.
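For readers who think in code rather than in Tableau's interface, the sketch below is a rough pandas analogue of what "pivot" (turning year columns into rows) and "split" (breaking one field into several) do to a messy spreadsheet. It is not Tableau's implementation, and the spreadsheet layout is made up.

```python
# Not Tableau code: a pandas analogue showing what "pivot" (columns to rows)
# and "split" (one field into several) mean for a messy spreadsheet.
# The spreadsheet layout below is hypothetical.
import pandas as pd

messy = pd.DataFrame({
    "Region - Segment": ["East - Retail", "West - Online", "East - Online"],
    "2013": [120, 95, 80],
    "2014": [140, 110, 90],
})

# Split "Region - Segment" into two proper columns
messy[["Region", "Segment"]] = messy["Region - Segment"].str.split(" - ", expand=True)

# Pivot the year columns into rows so each row is one (region, segment, year) value
tidy = messy.melt(
    id_vars=["Region", "Segment"],
    value_vars=["2013", "2014"],
    var_name="Year",
    value_name="Sales",
)
print(tidy)
```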
Keep Me in the Loop
Tableau 9 Beta is Out!
By Ellie Fields
Jan 12, 2015
Update: We are excited to announce that Tableau 9.0 is now available.
Many of you began receiving emails Friday night letting you know that the Tableau 9.0 beta is out.
Several of the features in Tableau 9.0 were shown at the 2014 Tableau Conference. You can see the keynote presentation from TC14 here to get an idea of what’s coming out. There’s much more on the beta than we had time to show at the conference.