Hello, I’m Katherine Haugh, an evaluator at Innovation Network—a nonprofit consulting firm based in Washington, DC. What is the best way for you to retain and recall information? Do text-heavy reports speak to you? Or do you prefer images, infographics, and colorful fonts?
I recently attended AEA’s 2014 conference in Denver, Colorado. While I was there, I took notes during the panels, presentations, and workshops I attended and shared them on Twitter and on my website. Some of you expressed interest in learning how I take my notes. Below I’ve listed some "how to" tips for taking visual notes that you can customize to your own personal style.
Step 1: Find the right mix for you (and your audience)
There are many different techniques for taking notes. For me, a plain white sheet of paper with an equal balance of images and text allows me to quickly recall what I wrote down.
Hot Tip: Organization is key. No matter the format, the more organized your notes are, the more likely you are to remember what you wrote. The goal is to keep your notes short but include enough triggers (key words, images, or symbols) to jog your memory when you look them over.
Step 2: Get the right tools
The best way to learn what technique works for you is to try out different options. Test out different color combinations, spacing, and symbols. Maybe too much color is overwhelming for you or you prefer stars to arrows. I like to stick with a darker base color and add punches of bright color for important items.
Hot Tip: If you prefer hand-written notes, stay away from inky pens—they smear easily! You don’t have to be an artist to be a brilliant note taker. The most important thing is to create notes that capture what’s most important in an organized, succinct, and readable fashion.
Step 3: Make it personal
Without question, one of the most effective ways to retain information is to make it personally significant to you. For example, drawing a picture of a swirl might indicate that you were confused by something covered in a presentation. This is helpful when you need to quickly scan your notes to find the parts that deserve the most attention. I often incorporate dorky jokes into my notes to help me recall what I wrote.
Hot Tip: Write your notes in your own words. Avoid copying text verbatim from a PowerPoint slide. Ideally, you want your notes to supplement a presentation—not copy it.
Questions, suggestions or feedback? Please reach out. I’d love to hear from you!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hello Loyal aea365 readers! I’m Sheila B. Robinson, aea365’s Lead Curator and sometimes Saturday contributor with one question for you: What is it that YOU would like to read about on this blog?
Lesson Learned: AEA365 has been going steadily since January 1, 2010 with 1800+ contributions from hundreds of evaluators across the globe. We accept individual submissions at aea365@eval.org on a rolling basis, along with inquiries about sponsored or themed weeks. Posts are about any and all evaluation-related topics, and anyone with something to share with fellow evaluators is welcome to contribute! If you are interested in sharing a tip, please be sure to check out our contribution guidelines here.
2015 has been declared the International Year of Evaluation and we suspect we’ll be hearing quite a lot about that in the coming weeks and months. The aim of designating 2015 as the International Year of Evaluation is to advocate and promote evaluation and evidence-based policy making at international, regional, national and local levels.
As a key learning tool for evaluation, aea365 can also be a fabulous vehicle for promoting evaluation and evidence-based policy. With that in mind, we would like to include your voice as we head into the new year and as our aea365 team considers inviting authors and groups to contribute.
Hot Tip: Let’s crowdsource some ideas for aea365 in 2015 and make it the best year ever.
Please let us know what you would like to see in aea365 by responding to these questions in the comments:
1. What do YOU want to read or learn more about on aea365 in 2015?
2. Who do YOU want to hear from on this blog?
Thanks very much for your input and your loyal readership.
Happy New Year!
¡Salud! and Season’s Greetings! We are Lisa Aponte-Soto and Saúl I. Maldonado, AEA GEDI alumni and co-chairs of LA RED, the Latina/o Responsive Evaluation Discourse Network. Aponte-Soto serves as National Program Deputy Director of RWJF New Connections at the OMG Center for Collaborative Learning. Maldonado is Adjunct Faculty at Santa Clara University’s School of Education and Counseling Psychology.
LA RED is a network of Latina/o AEA members driven by a collective mission of transformative leadership to: (1) increase the representation, engagement, and leadership of Latina/os in evaluation; and, (2) develop professional discourse for Latina/o Responsive Evaluation (LRE) theory, methods, and practices. Today, LA RED launches an aea365 week on Latina/o Responsive Evaluation Discourse. Below are our reflections on this week’s lessons learned, rad resources and hot tips from LA RED membership: Art Hernandez, Lisa Aponte-Soto, Wanda Casillas, Leah Christina Neubauer, Josie Serrata, Martha Hernandez, Grisel Robles-Schrader, Maria Jimenez, and Andrea Guajardo.
Lesson Learned on Defining LRE Practices:
Acknowledge Pan-ethnicity. Latina/o evaluators represent diverse cultural backgrounds and perspectives. As a pan-ethnic term, Latina/o has social and political benefits and boundaries. One benefit of ethnic aggregation is the allocation of resources; one boundary is the possible blurring of specific ethnic identities. We caution that Latina/os are not a homogeneous group, and self-identification as a Latina/o evaluator does not confer expertise for all Latina/o subgroups or communities.
Attend to language and culture. Evaluators learning from, with and for Latina/o communities must respect language and cultural nuances in the customization of evaluation design and data collection instruments.
Learn from History. It is important to recognize that, historically, evaluation practices in U.S. Latina/o communities have marginalized those communities and created barriers to how evaluation is perceived and subsequently conducted.
Engage Community. Inclusion demands a paradigm shift from evaluations of Latina/o communities to evaluations with/for Latina/o communities. This calls for prioritizing respect when collaborating with Latina/o communities and advocating for participatory methods, from co-creating instruments and reviewing data together to amplifying our avenues for sharing findings.
Participate in Dialogue. Culturally responsive evaluation with/for Latina/os requires continuous dialogue that is negotiated and renegotiated based on the contexts of identity, locality, geography, migration, country of origin, language, and much more.
Apply LatCrit. Drawing on critical race theory, we need to expand to frameworks rooted in Latino-based practices like LatCrit and question the utility of prescriptive linear and sequential logic models as the primary, and often only, pathway for evaluation design.
Champion LRE. To enact these practices, recruiting, retaining, and training Latina/o leaders in evaluation, as well as creating the conditions for a community of practice for all colleagues conducting evaluations with/for Latina/o communities, serve as distinctive yet complementary goals.
LA RED looks forward to learning from/with you.
The American Evaluation Association is celebrating Latina/o Responsive Evaluation Discourse Network Week. The contributions all this week to aea365 come from LA RED Network members.
My name is Art Hernandez and I am a Professor and Dean at Texas A&M University Corpus Christi. I teach and practice research and evaluation, and I have participated in the AEA Minority Serving Institution initiative, including serving as its Director. As a supporting member of the newly formed LA RED network, I focus this post on Latino cultural competence.
Lesson Learned: The usual assumption that people from particular cultural backgrounds (especially Latino/a) are by nature culturally competent to work with that population is often not well founded.
Hot Tip: While individuals who share a cultural background with the participants in an activity being evaluated are more likely to be culturally competent, it should not be assumed that a common background guarantees this competence. This is true for multiple reasons, a few of which are presented here for consideration:
Experience: Differences in experience are a great contributor to differences in perspective, and differences in values are extremely likely across generations, even within cultural groups.
Cultural identity: It is likely that in some situations, differences in education, economic background, and the like affect cultural identity to some demonstrable degree, even when experiences overlapped during a previous stage of life (e.g., childhood poverty).
Language: Differences in language fluency and usage between the evaluator and the evaluand are likely, given differences in education among other things. These differences must be examined even when evaluators are "fluent," since even among fluent speakers, differences in vocabulary, slang, and the like are often geographically based. Language is one of the main indicators of culture and is often the principal source of, or medium for, the "data" considered in evaluation exercises.
Lessons Learned: Basic principles of cultural competence: It should be standard practice to apply the basic principles of cultural competence in all situations, regardless of the apparent background of the evaluator or that individual’s ability to provide insight into the community about and from which information will be collected.
Common cultural competence elements are essential: a detailed self-assessment, plus direct engagement with and assessment of the community to determine values, perspectives, history, and objectives. All of these are germane and potentially influential, so they must be considered when crafting data collection and interpretive, analytical mechanisms.
Rad Resource: Read Del Prado et al.’s Culture, Method and Content of Self Concepts: Testing Trait, Individual-Self-Primacy and Cultural Psychology Perspectives.
¡Saludos! We are Lisa Aponte-Soto and Wanda Casillas, AEA GEDI alumni from the LA RED network. Aponte-Soto serves as National Program Deputy Director of RWJF New Connections at the OMG Center for Collaborative Learning. Casillas is a postdoctoral scholar with the National Center for Institutional Diversity at the University of Michigan.
The growing Latino presence in the United States demands a pool of Latina/os to support culturally responsive consultancy efforts; however, Latina/o evaluators are disproportionately underrepresented in the field. LA RED is committed to helping consultancies meaningfully engage in evaluation practices with a culturally responsive lens that attends to and unpacks the heterogeneity of Latina/o communities. The following highlights lessons learned from LA RED’s Evaluation 2014 Think Tank session on pathways for building a pipeline of Latina/o evaluators.
Lessons Learned:
Identify partners - To increase the representation of Latina/os in evaluation, it is necessary to collaborate with cross-cultural partners currently conducting Latina/o-specific evaluation, understand community needs, and learn from successes and challenges of evaluations conducted in a variety of Latina/o communities.
Build evaluation capacity of Latina/o communities - LA RED should work closely with Latina/o community-based organizations tasked with conducting program evaluations to identify potential partners, evaluation practice opportunities and professional development needs for Latina/o evaluators.
Create support systems - Seasoned Latina/o evaluators and cross-cultural partners are needed to serve as padrinos and madrinas to provide mentoring, coaching, and sponsorship to emerging Latina/o evaluators in communities of practice.
Facilitate thought leadership - Pláticas or discussions are critical to promoting Latina/o Responsive Evaluation (LRE) discourse and developing a culture of co-learning with communities and evaluators’ practices to advance the field.
Develop an outreach strategy - Outreach is instrumental to promoting the array of resources offered to support LRE. Additionally, we should promote the role of evaluation in Latina/o communities with local affiliates and graduate programs in geographic areas with larger Latina/o populations.
Create opportunities for professional development - Fostering a cadre of Latina/o evaluators extends beyond the support systems offered through networks, mentorship, coaching, and sponsorships to professional development traineeships and internships. Programs like the AEA GEDI and MSI offer educational preparation on LRE practices grounded in critical race theory but only offer a limited number of seats on an annual basis. Additional opportunities need to be established.
Rad Resource #1: LA RED is partnering with the Latina Researchers Network (LRN) to provide webinar trainings on LRE practices. Visit the LRN for information on the first webinar series, scheduled for February 2015.
Rad Resource #2: The Center for Latino Community Health Evaluation and Leadership Training offers fellowship opportunities for consultants conducting Latina/o health related evaluation. The Center is hosting its annual Latino Health Equity conference in April 2015.
We are Dr. Maria Jimenez, Independent Evaluation Consultant in Los Angeles, CA, and Andrea Guajardo, MPH, Director of Community Health at CHRISTUS Santa Rosa Health System in San Antonio, TX. As supporting members of the newly formed LA RED network, we focus this post on critical race theory as a practical application in Latino/a Responsive Evaluation.
Latino/a Responsive Evaluation often requires a participatory research approach and, in doing so, provides an opportunity to conduct evaluations within the context of a critical framework. Cultural intuition is a valuable tool for navigating race, ethnicity, and culture in community-based programs focused on Latino/a populations.
Hot Tip #1: Acknowledge issues of race/racism within the evaluation context.
Latinos are a heterogeneous group representing various ancestral populations, including indigenous American, African, and European. Evaluators need to understand the subtle and not-so-subtle differences among these populations. Critical race theory as a methodological approach can be used to address issues of race in Latino/a Responsive Evaluation. Critical race theory places race at the forefront of research, uses an interdisciplinary, participatory approach, and promotes a social justice agenda.
Hot Tip #2: Be aware of immigration and migration trends within the population of focus in the evaluation.
According to the 2012 Statistical Portrait of Hispanics in the United States from the Pew Research Center, 52.9 million people in the United States self-identified as either Hispanic or Hispanic and one other race. Of these, nearly 19 million are foreign-born, with Mexico as the most common country of origin (64.2%). Fifty-two percent of Hispanic children in the United States are considered 2nd generation. Pew data also show significant differences in language, news and information acquisition preferences, and political and cultural opinions based on the length of time a person has lived in the United States and whether a person is undocumented or is a first-, second-, or third-generation American citizen.
Hot Tip #3: Diversify your evaluation team.
Build a transdisciplinary team with diverse sociological, historical, or practical perspectives. For example, team members with strong backgrounds in urban Chicano studies will provide a different lens than a team member with experience in rural Tejano culture. Acknowledge the intra-racial complexities of the Latino/a population and ensure that multiple viewpoints exist on your evaluation team. Additionally, ensure that members of your team come from various ethnic/racial backgrounds and/or have a solid understanding of the culture of the program, participants, or communities being studied.
Rad Resources:
Pew Research Hispanic Trends Project
Parker’s commentary: Can Critical Theories of or on Race Be Used in Evaluation Research in Education?
I am Leah Christina Neubauer with DePaul University’s MPH Program. I serve as President of the Chicagoland Evaluation Association (CEA). This post highlights my paper in the LA RED network’s AEA 2014 session (#1439): Visionary Evaluation for Building Sustainable Cultural Responsive Evaluation Practices for Latino/a Communities.
From my paper, Lessons from Little Village, Public Health & LatCrit, I offer three guiding questions:
When conducting evaluation with and within Latino communities, does the setting matter?
Should the evaluator be Latino?
Should the design and methods be Latino-focused?
In response to my queries, I offer LatCrit as a framework for advancing Latino-focused evaluation dialogue and scholarship. I offer key insights and resources below.
Hot Tip: Definition, Aims & Values
What is LatCrit? LatCrit is a theory that considers issues of concern to Latinas/os, such as immigration, language rights, bilingual schools, internal colonialism, sanctuary for Latin American refugees, multi-identity, and census categories for "Hispanics".
Aims: Aligned with critical legal theory roots, initial aims included:
The production of critical and interdisciplinary knowledge
The promotion of substantive social transformation
The expansion and interconnection of anti-subordination struggles, and
The cultivation of community and coalition among outsider scholars.
Values: These aims are coupled with a commitment to:
Expansive practical programming
Vast community-building structures
Continual engagement of one’s self-critique, and
Analysis to ensure multidimensionality.
Lessons Learned: Possibilities for Evaluators and Evaluation
Multiple Evaluation Approaches: For evaluators, LatCrit highlights theories and models which evoke use, participation, responsiveness, culture, indigenous peoples, social justice, and transformation.
Resources: In my health work, ample time and money are essential for quality LatCrit-aligned evaluation. Evaluators (or RFP writers!) should allow time for formative or developmental processes. Many of our Latino communities have untold processes, stories, and phenomena that must be told and appropriately captured.
Multidimensionality: To be clear, multidimensionality is valued, understood, and shapes all evaluation processes. In practice, this includes resources to develop a nuanced understanding of key multidimensional issues such as: history, context, community, stakeholders, language, dialect, power structures, etc.
Latino-Focused Evaluator Roles: Latino-focused evaluators are context-sensitive interpreters, translators, mediators, and storytellers. They are grounded in an international, contextual perspective on evaluation. They are familiar with the community’s geographic and historical background. They bring cultural and linguistic competency. They practice multidisciplinary methodology and embody responsive, power-aware evaluation practice.
Rad Resources:
Clayson, Castañeda, Sanchez, and Brindis’ Unequal power—changing landscapes: Negotiations between evaluation stakeholders in Latino communities.
Mertens, and Wilson’s Program evaluation theory and practice: A comprehensive guide.
Valdes’ Foreword: Under construction. LatCrit consciousness, community, and theory.
Hi all! Liz Zadnik here, aea365 Outreach Coordinator and occasional Saturday contributor. I wanted to share some insights and reflections that grew out of a recent EVALTALK discussion thread. Last month, someone posed the following request:
I’m searching for a "Why Evaluate" article for parents/community members/stakeholders. An article that explains in clear and plain language why organizations evaluate (particularly schools) and evaluation’s potential benefits. Any suggestions?
Rad Resources: Others were kind enough to share resources, including this slideshare deck that moves through some language and reasoning for program evaluation and assessment, along with book recommendations. There is also a very helpful list from PlainLanguage.gov offering possible replacements for commonly used words. (Even the headings, "Instead of…" and "Try…", make the shift seem much more manageable.)
Lessons Learned: Making evaluation accessible and understandable requires tapping into an emotional and experiential core.
Think about never actually saying "evaluate" or "evaluation." It’s OK not to use phrases or terms if they are obstacles for engaging people in the evaluation process. If "capturing impact," "painting a picture," "tracking progress" or any other combination of words works…use it! It may be helpful to talk with interested or enthusiastic community members about what they think of evaluation and what it means to them. This helps gain insight into relevant language and framing for future discussions.
Have the group brainstorm potential benefits, rather than listing them for them. Just as you engage community members in discussing the "how," ask them what they feel is the "why" of evaluation. I have heard the most amazing and insightful responses when I have done this with organizations and community members. Ask the group "What can we do with the information we get from this question/item/approach?" and see what happens!
Evaluation is about being responsible and accountable. For me, program evaluation and assessment is about ethical practice and stewardship of resources. I have found community members and colleagues receptive when I frame evaluation as a way to make sure we are doing what we say we’re doing - that we are being transparent, accountable, and clear on our expectations and use of funds.
We’d love to hear how others in the aea365 readership are engaging communities in accessible conversations about evaluation. Share your tips and resources in the comments section!
Happy New Year! We’re Leslie Goodyear, Jennifer Jewiss, Janet Usinger, and Eric Barela, the co-leaders of the AEA Qualitative Methods TIG. All four of us are practicing evaluators with a passion for bringing the stories and experiences of evaluation stakeholders to the fore. For the past few years, as part of editing a book together, we have been exploring how qualitative inquiry and evaluation fit together and how to identify good practice in qualitative evaluation. What are criteria for quality in qualitative evaluation? As we were reviewing chapter drafts for the book, we had to come up with a way to determine the quality of the work represented by the authors. After many conference calls and email conversations, we developed a model for thinking about the elements of quality in qualitative evaluation.
Lesson Learned: High quality qualitative evaluation is grounded in a cyclical and reflective process that is facilitated by the evaluator. The following five elements make up the process:
The evaluator first must bring a clear sense of personal identity and professional role to the process. It’s a matter of understanding who you are, what you know, and what you need to learn.
The evaluator needs to engage stakeholders and build trusting relationships from the outset and throughout the evaluation.
High quality evaluation relies on sound methodology, systematically applied, that is explicitly shared with stakeholders.
Conducting quality evaluation can only be accomplished by remaining "true" to the data; in other words, hearing participants as they are, not how the evaluator wants them to be.
Skillful facilitation of the process by the evaluator results in learning by all involved.
Lesson Learned: The elements in the model are not necessarily progressive or discrete. All the elements are at play in an evaluation and may cycle back on each other, interact with each other, or occur in a completely different order.
Lesson Learned: Although we don’t explicitly call out context as an element of the model, a strong, dynamic understanding of context is critical grounding for all high quality qualitative inquiry. Thus, context is embedded in all the elements in the model.
Rad Resource: More about this model and stories from our own practice of qualitative inquiry in evaluation can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).
The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation.
|
Hello everyone—Laurie Stevahn (Seattle University) and Jean King (University of Minnesota) here—continuing to grapple with issues relevant to program evaluator competencies (whether essential sets exist) and usefulness (if enhanced practice results). In fact, for over a decade we have been working on a formal set of evaluator competencies, trying to answer the daunting question of what knowledge, skills, and dispositions distinguish the practice of professional program evaluators. Applying what we’ve learned to the work of "qualitative evaluators" didn’t quite make sense because an evaluator is neither qualitative nor quantitative. As we learned in Evaluation 101, methods come second, once we’ve established an evaluation’s purpose and overarching questions; methods do not guide evaluations. So a competent qualitative evaluator is first and foremost a competent evaluator. But what is a competent evaluator?
Rad Resources: Reading through sets of competencies—there is a growing number of them around the world—can be a helpful form of reflection. We synthesized four competency taxonomies:
Essential Competencies for Program Evaluators
Competencies for Canadian Evaluation Practice
International Board of Standards for Training, Performance, and Instruction (ibstpi) Evaluator Competencies
Professional Competencies of Qualitative Research Consultants
Lesson Learned: Thankfully, there was overlap across the domain sets. We can say with considerable confidence that a competent evaluator demonstrates competencies in five areas:
Professional—acts ethically/reflectively and enhances/advances professional practice.
Technical—applies appropriate methodology.
Situational—considers/analyzes context successfully.
Management—conducts/manages projects skillfully.
Interpersonal—interacts/communicates effectively and respectfully.
Lesson Learned: What distinguishes a competent qualitative evaluator? An enduring commitment to the qualitative paradigm. Qualitative evaluators understand and intentionally use the qualitative paradigm, choosing projects with questions that require answers from qualitative data. They need technical methodological expertise related to collecting, recording, and analyzing qualitative data.
Hot Tip: When using qualitative methods, focus on developing a special "sixth sense" to ensure a high-quality process and outcomes for qualitative studies. This is your ability to interact skillfully with a wide range of others throughout an evaluation to produce trustworthy and meaningful results. It involves interpersonal skills on steroids. A competent qualitative evaluator has to be attuned to social situations and skillfully interact with people in authentic ways from start to finish, knowing quickly when things are tanking.
Hot Tip: In the end, highly specialized sets of competencies unique to a particular evaluator role are less important than your commitment to engaging in ongoing reflection, self-assessment, and collaborative conversation about what effectiveness means in particular conditions and circumstances.
Rad Resource: Stevahn, L. and King, J. A. (2014). What does it take to be an effective qualitative evaluator? Essential Competencies. In Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.), Qualitative inquiry in evaluation: From theory to practice. Jossey-Bass, pp. 139-166.
My name is Michael Quinn Patton and I am an independent evaluation consultant. That means I make my living meeting my clients’ information needs. That’s how I came to engage in Utilization-Focused Evaluation. Utilization-focused evaluation does not depend on or advocate any particular evaluation content, model, method, theory, or even use. Rather, it is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation. In considering the rich and varied menu of evaluation, utilization-focused evaluation can include any evaluative purpose (formative, summative, developmental), any kind of data (quantitative, qualitative, mixed), any kind of design (e.g., naturalistic, experimental) and any kind of focus (processes, outcomes, impacts, costs, and cost-benefit, among many possibilities). Utilization-focused evaluation is a process for making decisions about these issues in collaboration with an identified group of primary users focusing on their intended uses of evaluation.
Hot Tip: Involve primary intended users in methods decisions. This enhances their understanding and capacity to make sense of and use findings. Because different methods involve different timelines and require different amounts of resources (qualitative inquiry, for example, is especially labor-intensive because of the fieldwork involved), methods decisions should be deliberated and negotiated collaboratively, not made autonomously by the evaluator, despite her or his methodological expertise.
Hot Tip: Present methods options. In order for primary intended users to participate in methods and design deliberations, and make an informed decision about priority evaluation questions and appropriate methods, the utilization-focused evaluation facilitator must be able to present the primary data collection options, their strengths and weaknesses, and what makes them more or less appropriate for the evaluation issues at hand.
Hot Tip: Qualitative inquiry is always on the menu of options. The evaluator needs to understand and be sufficiently proficient at conducting qualitative evaluations to present it as a viable option and explain its particular niche and potential contributions for the evaluation being designed.
Hot Tip: Keep up-to-date with new developments in qualitative inquiry. New directions include uses of social media for data collection and reporting findings, increased use of visual data, and many new purposeful sampling options.
Rad Resources:
Patton, M.Q. (2014) Qualitative inquiry in utilization-focused evaluation. In Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.), Qualitative inquiry in evaluation: From theory to practice. Jossey-Bass, pp. 25-54.
Patton, M.Q. (2015) Qualitative Research and Evaluation Methods (4th ed.). Sage Publications.
Patton, M.Q. (2014) Top 10 Developments in Qualitative Evaluation for the Last Decade.
Hi. I am George Grob, an evaluation consultant focusing on policy development and advocacy. During 40 years of federal service, mostly in the Department of Health and Human Services, I learned that policy makers (Members of Congress and high-level executives) are very interested in evaluations. They especially like observations, real-life stories, and field reports.
Hot Tips:
#1. Hit the pavement. Some of our more compelling evaluations were based on onsite reviews and discussions with program beneficiaries. For example, in the face of a severe shortage of foster families, state and local agencies began intensive media campaigns to recruit them. That didn’t work. We were asked to find out why not. When we interviewed foster families they told us they joined up because they were impressed with other foster parents they had met. Our report showed that this informal foster parent network could be used as a far more powerful motivating force than the advertisements.
#2. Tell them something they don’t already know. Policy makers and their staff are very well read and constantly in touch with advocates and researchers. It is important to learn where their knowledge black holes are. That’s easy: they will tell you. Work on those. For example, the U.S. Surgeon General was getting worried about wine coolers. She was afraid that kids were being enticed into drinking what seemed like a harmless alcoholic drink that could be a gateway to heavier drinking. She was right. We found that convenience stores near schools were placing wine coolers in fruit juice aisles. Bottle labels obscured the alcohol content: dark backgrounds printed with only slightly darker, similarly colored ink in a small font. We asked kids to pick out wine coolers from similarly bottled fruit juices. They couldn’t do it. Our report buttressed the Surgeon General’s campaign to reduce child consumption of wine coolers.
#3. Answer their questions. One of my favorite stories is about the distress of policy makers who noticed that a seemingly disproportionate amount of foster care dollars were being spent on administration instead of foster care payments. They were ready to carve out what they believed was waste and give it to the kids. They asked us to look into it. Our study found that most of what was labeled as "administrative costs" was case work—family studies to determine the best placement of a child and to prepare for court hearings. This was the very heart of the foster care system. The policy community sheathed their knives and looked deeper into the foster care system.
Rad Resource: Grob, G. (2014). Qualitative inquiry for policy makers. In Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.), Qualitative inquiry in evaluation: From theory to practice. Jossey-Bass, pp. 55-76.
Greetings! I’m Katrina Bledsoe and I’m a research director at the Missouri-based DeBruce Foundation. The Foundation is currently working on starting a research institute that addresses issues related to education, community, and economic development. In my years of working with and in communities, I’ve found that qualitative inquiry is a foundational tool for providing a good overview and helping to tell a solid, systematic, and robust story. I use the theory-driven evaluation (TDE) approach quite a bit, and so I have learned how to really make use of qualitative inquiry within that particular approach.
Hot Tip #1. Meeting with Stakeholders is Part of the Qualitative Inquiry Process. Many folks use meetings as a way to keep in touch, update funders and stakeholders, hammer out the contract and scope of work of the evaluation, and get a sense of the organization. But interactions and regular meetings with stakeholders also provide valuable qualitative information that can be used in the analysis of the data. They’re also great for developing the program theory. Meeting notes can demonstrate an overall picture and describe key players, key concerns, and provide a qualitative baseline by which change can be tracked.
Hot Tip #2 and a Rad Resource. The ubiquitous logic model has morphed and changed over time. Logic models now can be, and are, designed to represent the complex systems in which programs operate. The linear logic model is becoming a bit 20th century; the trend now is to show multiple levels, multiple inputs, and multiple outcomes all in one system. My colleague and AEA member Tarek Azzam of Claremont Graduate University has some great models that are more interactive; see http://interactiveconcepts.info/files/LACOE_Logic_Model_try_2.swf for an example.
Hot Tip #3: Norms and Values Undergird Any Theory-Driven Evaluation, and a Good Graphic Facilitator Can Help Tease Them Out! Norms and values are really at the heart of an evaluation, and the TDE approach is often used to articulate the qualitative and descriptive norms and values of the community, the context, and the program itself. Strategies that get stakeholders working on a more descriptive and qualitative level can guide them to develop more accurate outputs and outcomes. Because people often think visually, we’ve been using visual/graphic facilitation to depict the values and norms stakeholders hold. A Rad Resource I’ve recently come across is ImageThink, a group devoted to visual facilitation that provides related resources and services.
Rad Resource: Bledsoe, K. (2014). Qualitative inquiry within theory-driven evaluation: Perspectives and future directions. In Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.), Qualitative inquiry in evaluation: From theory to practice. Jossey-Bass, pp. 77-98.
Hello, I’m Norma Martinez-Rubin, a public health practitioner, program evaluator, and occasional trainer. Work projects that integrate opportunities to learn about the people for whom they are designed excite me. Hence, I find qualitative inquiry quite fitting. Focus groups and semi-structured interviews have been primary data-collection methods on evaluation studies I’ve led, guided, or contributed to. They’ve yielded candid remarks and surprising insights about mainstream topics. At times, too, once-guarded opinions have become less so when moderated discussions foster honest expression intended for project improvement.
How does one go about identifying the best course of inquiry? What methodological decisions must be made to maximize use of limited time and funding to produce maximum results? How, in a culturally appropriate way, does one acknowledge study participants’ involvement? Is there a necessity to balance the evaluator role differently when taking a consultative approach as a new, external evaluator?
Lesson Learned: Much planning, discussion, protocol drafting, redesigning, and renegotiating occurs before implementing an evaluation study. Whether working solo or collaboratively with study sponsors and other colleagues, one has to establish and foster working relationships to carry out an evaluation that yields useful findings.
Lesson Learned: Transitioning from one professional approach to another, such as from internal to external evaluator or vice versa, requires taking stock of one’s professional strengths and using them as levers. (Quashing those strengths serves no one.)
Lesson Learned: Evaluation requires a knack for relationship building. Introverts’ inquiring minds and predilection for reflection are advantageous attributes in qualitative inquiry; they serve well when moderating discussion groups and provide the focus required for data analysis.
Hot Tip: Quickly building rapport serves to trigger rich discussions. Never mind the misplaced argument for maintaining a sense of objective neutrality. Cold, calculated exchange is simply that: cold, calculated exchange, not genuine communication or inquiry.
Rad Resource: The editors of Qualitative Inquiry in Evaluation (Jossey-Bass, 2014) compiled authors’ research and experiences that illuminate the theory, purpose, and application of qualitative inquiry. The editors write of their own discoveries in the process of producing the book. They also invite the reader to examine what constitutes quality in qualitative inquiry.
Rad Resource: The chapter titled, Balancing Inside-Outsider Roles as a New, External Evaluator in Qualitative Inquiry in Evaluation: From Theory to Practice (Jossey-Bass, 2014) illustrates how personal curiosity, professional training, and personal experiences can function as levers when designing and implementing protocols for focus groups and semi-structured interviews.
Hello, my name is Jayne Corso and I am the community manager for American Evaluation Association. From my past posts, you can see that I love social media. I see these platforms as never-ending conferences, where you can meet and share ideas with people with similar interests. Although I have a tendency to favor social platforms such as Facebook and Twitter, one must not forget the importance of the blog.
Even the most robust social media presence cannot substitute for the benefits of an active and engaging blog. Social media platforms come and go (do you remember Myspace, Friendster, or Geocities?), but blog domains are stable and always available, giving you firm control over your online publishing.
Rad Resource: Blogging Creates Credibility
Sharing advice through blogs establishes you as an industry or subject-matter expert. It presents your name to other professionals as a trustworthy resource. Blogging also gives you the opportunity to establish your own brand, which includes your writing style and design.
Rad Resource: Make connections through blogging
The evaluation field is filled with bloggers. This opens many opportunities for cross-blogging. Cross-blogging occurs when bloggers with similar interests become guest writers on each other’s blogs. This introduces your blog to new audiences within the field and further shows your credibility on the subject matter.
Blog posts are also very easy to share on social media outlets. Most blogs have a share button that generates a post for you. If others find your blog post helpful, this posting tool can help create a lot of engagement.
Hot Tip: Keep your blog up-to-date
Your blog is only as strong as its content, so keep it active and up-to-date. I recommend creating a new post at least once a week to keep people interested in what you have to say. I also recommend dating all of your posts so your readers have a sense of relevance and context.
Rad Resources: Here are some great evaluation blogs to follow and learn from.
Ann K. Emery: Data Analysis + Visualization
BetterEvaluation
EVALBLOG: John Gargani’s blog about program design and evaluation
Eval Central
EVALUSPHERIC PERCEPTIONS: Reflections of an everyday evaluator/learner/educator exploring the evalusphere
Evergreen Data
Free-range evaluation
Fresh Spectrum
The Listening Resource
Find more bloggers on the AEA website and stay tuned for Bloggers week coming soon on AEA365!
Anne Cullen Puente and April Bender on 6 Ways to Sharpen Your Evaluation with Mindfulness Principles
Hi, we’re Anne Cullen Puente and April Bender, evaluators at the Fetzer Institute in Kalamazoo, Michigan. We were really excited by Michael Quinn Patton’s June 2014 article in the American Journal of Evaluation where he encouraged evaluators to develop our self-reflexivity and deeply consider our own thinking patterns. We couldn’t agree more, and believe that mindful evaluation could be helpful for doing just that.
Mindfulness can be thought of as a theoretical construct, a type of awareness, and/or specific meditation practices. Some people approach mindfulness through meditation; others by simply adopting a lens of curiosity. The underlying thread is awareness of the present moment. The now.
Mindful evaluation isn’t a specific method, but rather an invitation to be deeply aware and present in all stages of the evaluation process. When we do so, we no longer operate on autopilot but give thought to what we are doing, why we are doing it, and how we are doing it.
Hot Tips: Here are six tips for building the self-reflexivity Patton advises:
Set an intention to be more mindful. Research shows that outcomes correlate with intentions, so having a personal vision for what you want to get out of your mindfulness practice is important.
Bring full attention. Mindfulness is all about cultivating attention to thoughts, feelings, and actions. One easy way to do this is to minimize distraction. Whether writing a report or analyzing data, do just that. Avoid multitasking; don’t work with 20 browser windows open, for example.
Become aware. People who practice mindfulness report greater insight into their mental processes and emotions. Ask questions like, What am I thinking? What am I feeling? What does this remind me of and why? What are my intended goals?
Practice deep listening. When you listen to others, do so with genuine curiosity. Often our training in social science methods kicks in automatically. We start to analyze what we’re hearing while the other person is talking. This can be helpful in reducing the amount of time we spend analyzing later, but it interferes with our ability to truly listen.
Stay curious and open. Have fun and experiment. Don’t operate on autopilot. Try to stay open to novel ideas, approaches, opportunities, and differing perspectives.
Suspend judgment. Evaluation is all about judgment—evaluative judgments based on quality evidence and standards. But people tend to form opinions based on biases and preconceived ideas. Consider a more mindful approach: examine your motivations, recognize that you have an opinion and set that opinion aside, seek alternative hypotheses, ask clarifying questions, gather evidence, and THEN make an evaluative judgment.
Rad Resources:
Center for Investigating Healthy Minds
The Center for Contemplative Mind in Society
Fetzer Institute blog
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Claire Tourmen on Introducing Evaluation
Cultural Competence Week: Lisa Aponte-Soto and Leah Neubauer on Culturally Responsive Evaluation 101
OPEN Week: Erin Stack & Lindsey Patterson on Successfully Transitioning from Student to Professional
I’m Aimee White, new Board Member-at-Large. Thank you to all who elected me; I will strive to represent you well. I live in Washington State but have roots in the southeastern US. I own and serve as Principal Evaluator for Custom Evaluation Services, an independent evaluation consulting firm specializing in Collaborative, Participatory, and Empowerment Evaluation principles and practices. We deeply value and respect the contexts and complexities within which our nonprofit clients serve their communities, and strive to provide exemplary evaluation services. I’m honored to be serving you and urge you to reach out to me to share your thoughts about how we can improve.
I began my evaluation practice in service to the tiniest nonprofits. It was critical that I engage staff in all aspects of my work; they were the evaluation resources! I find unique ways to involve as many people as I can in the evaluation design process, data collection, and reporting. I do this knowing that many people in social and human service agencies are overworked, are not trained evaluators, and have no time for one more thing to do.
Lessons Learned: I’m grateful for my social work training, where I learned the phrase "meet people where they are." No matter how "expert" you are in whatever evaluation method or process you hope to use, if you cannot meet clients at their level of understanding, the project will go nowhere. As a community-based evaluator, it is critical that you find ways to challenge your own use of jargon and seek to use the language of your clients. The goal is always to walk together through the learning process we call evaluation.
Rad Resource: In today’s high-tech and overly complex world, I take a risk by offering an incredibly "low-tech" idea. I created something I call the "one-line logic model," which can be used at the program, management, and even coalition levels. Take one line out of a logic model and put it before staff and leadership for some small period of time. Front-line staff look at the "activities," intentionally studying and improving their performance, perhaps consulting implementation guides and improving fidelity for evidence-based programs. The data person pulls the data for that line, checks the input processes among the staff, and improves the reporting or visualization processes around that set of data. Management checks that resources and activities are accurately and appropriately linked, and that outputs and outcomes are emerging from that work. I’ve found that this takes what can be an inactive document and makes it alive and meaningful. Try making your evaluation processes more usable!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
NA Week: Sue Hamann on Including Intended Service Recipients in Needs Assessment and Program Planning
APC Week: Rhonda Schlangen on Joint Evaluation Strategies for Advocacy and Services
Jacquelyn Christensen on Wordle and Survey Anchors
I’m Jennifer Grove, Prevention Outreach Coordinator at the National Sexual Violence Resource Center (NSVRC), a technical assistance provider for anti-sexual violence programs throughout the country. I’ve worked in this movement for nearly 17 years, but when it comes to evaluation work, I’m a newbie. Evaluation has been an area of interest for programs for several years now, as many non-profit organizations are tasked with showing funders that sexual violence prevention work is valuable. But how do you provide resources and training on a subject that you don’t quite understand yourself? Here are a few of the lessons I’ve learned on my journey so far.
Lesson Learned: An organizational commitment to evaluation is vital. I’ve seen programs that say they are committed to evaluation hire an evaluator to do the work. This approach is shortsighted. When an organization invests all of its time and energy into one person doing all of the work, what happens when that person leaves? We like to think of evaluation as long-term and integrated into every aspect of an organization. Here at the NSVRC, we developed a Core Evaluation Team made up of staff who care about or are responsible for evaluation. We contracted with an evaluator to provide training, guide us through hands-on evaluation projects, and provide guidance to the Team over the course of a few years. We are now two years into the process, and while there have been some staffing changes that have resulted in changes to the Team structure, efforts have continued without interruption.
Lesson Learned: Evaluation capacity-building takes time. We received training on the various aspects of evaluation and engaged in an internal evaluation project (complete with logic model, interview protocol, coding, and final report). According to the timeline we developed at the beginning of the process, this should have taken about eight months. In reality, it took over 12. The lesson learned here is this: most organizations do not have the luxury of stopping operations so that staff can spend all of their time training and building their skills for evaluation. The capacity-building work happens in conjunction with all of the other work the organization is tasked with completing. Flexibility is key.
Hot Tip: Share what you’ve learned. The most important part of this experience is being able to share what we are learning with others. As we move through our evaluation trainings, we are capturing our lessons learned and collecting evaluation resources so that we can share them with others in the course of our technical assistance and resource provision.
Rad Resource: Check out an online learning course developed by the NSVRC, Evaluating Sexual Violence Prevention Programs: Steps and strategies for preventionists.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Liz Zadnik on Bringing passion and enthusiasm to program evaluation
Sheila B Robinson on Introducing our Newest aea365 Team Members
CP TIG Week: Helen Singer, Sally Thigpen, and Natalie Wilkins on Understanding Evidence: CDC’s Interactive Tool to Support Evidence-Based Decision Making
I am Anna Douglas and I conduct evaluation and assessment research with Purdue University’s Institute for Precollege Engineering Research, also known as INSPIRE. This post is about finding and selecting assessments to use in evaluating engineering education programs.
Recent years have seen an increase in science, technology, engineering, and mathematics (STEM) education initiatives and emphasis on bringing engineering learning opportunities to students of all ages. However, in my experience, it can be difficult for evaluators to locate assessments related to learning or attitudes about engineering. When STEM assessment instruments are found, oftentimes they do not include anything specifically about engineering. Fortunately, there are some places devoted specifically to engineering education assessment and evaluation.
Rad Resource: INSPIRE has an Assessment Center website, which provides access to engineering education assessment instruments and makes the evidence for validity publicly available. In addition, INSPIRE has links to other assessment resources, such as Assessing Women and Men in Engineering, a program affiliated with Penn State University.
Rad Resource: ASSESS Engineering Education is a search engine for engineering education assessment instruments.
If you don’t find what you are looking for in the INSPIRE, AWE, or ASSESS databases, help may still be out there.
Lesson Learned #1: If it is important enough to be measured for our project, someone has probably measured it (or something similar) before. Even though evaluators may not have access to engineering education or other educational journals, one place to search is Google Scholar, using keywords related to what you are looking for. This helps you to 1) locate research being conducted in a similar engineering education area (the researchers may have used some type of assessment) and 2) locate published instruments, which one would expect to have a degree of validity evidence.
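To make the keyword idea concrete, here is a trivial sketch in Python of assembling a Google Scholar search URL. The search terms are hypothetical examples, not a prescribed list; swap in terms that match your own construct and population.

from urllib.parse import urlencode

# Hypothetical keywords for an engineering-attitudes instrument search.
keywords = '"engineering self-efficacy" survey instrument validity'
url = "https://scholar.google.com/scholar?" + urlencode({"q": keywords})
print(url)  # paste the printed URL into a browser to run the search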
Lesson Learned #2: People who develop surveys generally like others to use them. It’s a compliment. It is okay to contact the authors for permission to use the survey and the validity evidence collected, even if you cannot access the article. At INSPIRE, we are constantly involved in the assessment development process. When someone contacts us to use an instrument, we view that as a "win-win": the evaluator gets a tool, our instrument gets used, and with the sharing of data and/or results, we get further information about how the instrument is functioning in different settings.
Lesson Learned #3: STEM evaluators are in this together. Another great way to locate assessment instruments is to post to the STEM RIG on LinkedIn, or to pose the question to the EvalTalk listserv. This goes back to Lesson Learned #1: most of the important outcomes are being measured by others.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Hui-Hui Wang on Assessing STEM Education
STEM Week: Alyssa Na’im on Using Culturally and Contextually Responsive Practices in STEM Education Evaluation
Tina Phillips on Evaluating Public Participation in Science Research (PPSR) Projects
Hey there. My name is Kath McNiff and I’m an online community manager at QSR International (the makers of NVivo).
Lessons Learned: We’re heading into a brand new year of evaluation (actually, THE year of evaluation). Oh, the joys of a clean slate! A chance to right the wrongs, sharpen the tools, clear the decks and morph into the best 2015 version of ourselves.
Here are some resolutions to consider:
Do an Inbox detox. Is your cluttered inbox making you feel overwhelmed and out of control? Vital details can easily slip through the cracks as you flounder around in the digital debris. Get yourself a copy of How to be a Productivity Ninja and follow Graham Allcott’s simple steps for getting your inbox to zero.
Decide on your Digital Strategy. Have you been following the same old tweeters for the past two years? And what about those cobwebs on your LinkedIn profile? It’s time to make sure your digital footprint is polished and professional. If you’re looking at social media as a rich reserve of qualitative data, you need to make decisions about platforms, data collection, and ethics.
Get the right tools. Seriously consider using the free version of Evernote. While you’re out in the field, you can use your phone or tablet to record interviews, take field notes, snap photos, and clip relevant content from the web. Then, back at your desk, you can sync notebooks and have easy access to all your material (and bring it into NVivo for analysis).
Develop good NVivo habits. Bring your research design documents into NVivo and refer to them regularly as you analyze your data. Start a project journal in NVivo and write, write, write, remembering to link to the data that supports your emerging insights. Then, when a client demands to know how you reached your conclusions, you can turn to your journal (complete with charts, word clouds, and models). Check the QSR website for details about pricing.
Well, that should get us to February at least. Start the year feeling in control of your virtual world so you can spend more time celebrating in the physical one!
Hot Tip: Take a fresh look at the tools you use every day. Are you missing out on some really useful feature because you always follow the same well-worn path? In the case of NVivo, spend some time watching the YouTube videos or reading the Help, and explore features you haven’t used before (framework matrices, anyone?).
Rad Resources: If you want to get your to-do list under control, try one of these free apps: Todoist or Trello.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Anna Jo Perry and Jens J. Hansen on Computer Aided Evaluation Reports
Käri Greene on NVivo
Bob Kahle on Practical Strategies to Leverage Technology for Deeper Insights
My name is Petra Chambers-Sinclair and I’m a devout evaluation nerd and Biohacker from Victoria, British Columbia. Fellow evaluators understand when I explain that, as a Biohacker, I apply evaluative thinking to my personal life with the goal of maximizing my well-being. I run nutritional and lifestyle experiments, and then use data to fine-tune the strategies I use to achieve my goals.
Hot Tip: An evaluative mindset is all that Biohacking requires.
Summative Evaluation
Biohackers use summative evaluation when following an established dietary protocol or a lifestyle hack, such as one aimed at reversing an autoimmune condition or improving sleep quality. Biohackers first gather baseline data, then engage with the protocol as prescribed, and gather data again at the end. As with any summative evaluation, the final stage is determining whether it’s worth continuing with the hack.
Formative Evaluation
If, after engaging in a nutritional or lifestyle hack for a reasonable length of time, desired interim results aren’t being realized, biohackers assess fidelity to the protocol. If fidelity is perfect, formative adjustments can be implemented and subsequent results monitored.
Developmental Evaluation
Developmental Biohacking enables learning and engagement when there is no predetermined intervention; when conditions are complex and causality is hard to track. It enables biohackers to develop an intervention inside the mess of real life. Biohackers go developmental when nothing else seems to be working and when environments both inside and outside the body are unpredictable, such as with health issues that no one can explain or offer effective treatment for. Gradually, through self-experimentation and relentless reality-testing, an intervention might emerge that can be evaluated formatively or summatively. In the meantime, developmental evaluation allows for interaction, observation and adaptation.
The internet has allowed people who are innovating in this way to share the results of their experiments with each other, enabling Developmental Biohackers to accelerate learning & pattern-finding, thereby creating highly effective and scalable interventions, such as the Autoimmune Protocol. Biohacking blogs remain a primary source of developmental information.
N=1
An N=1 is an experiment with one participant.
The assumption inherent to N=1 in the Biohacking world is that universal solutions to complex health problems have limited effectiveness, as we each have unique histories, genetic profiles, environments, and patterns of responding.
We are complex systems living in complex systems.
And an evaluative mindset is all we need to leverage this complexity on behalf of increased health & well-being.
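To make the summative N=1 cycle concrete, here is a minimal sketch in Python: gather baseline data, run the protocol, gather data again, and compare. The sleep measurements, duration, and decision threshold are all invented for illustration, not drawn from any real protocol.

# Minimal N=1 sketch: compare baseline sleep (hours) to sleep under a hack.
# All numbers are hypothetical.
baseline = [6.1, 5.8, 6.4, 6.0, 5.9, 6.2, 6.3]      # week before the hack
intervention = [6.9, 7.2, 6.8, 7.4, 7.0, 7.1, 7.3]  # week on the hack

def mean(xs):
    return sum(xs) / len(xs)

change = mean(intervention) - mean(baseline)
print(f"Baseline mean: {mean(baseline):.2f} h")
print(f"Intervention mean: {mean(intervention):.2f} h")

# Summative judgment: the 0.5-hour threshold is an arbitrary personal
# standard, not an evidence-based cutoff.
print("Keep the hack" if change >= 0.5 else "Drop or adjust the hack")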
Rad Resources:
A Beginner’s Guide to Biohacking by Mark Moschel: https://www.bulletproofexec.com/beginners-guide-to-biohacking-101/
And another one, if it’s not considered self-promotion:
Biohacking: the ultimate New Year’s Resolution (because it’s not too late to make one!) http://petra8paleo.com/2014/12/26/biohacking-the-ultimate-new-years-resolution/
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
DE Week: Michael Quinn Patton on Developmental Evaluation
Tom Archibald and Jane Buckley on Evaluative Thinking: The ‘Je Ne Sais Quoi’ of Evaluation Capacity Building and Evaluation Practice
Systems Week: Glenda Eoyang on Complexity Demands Simplicity
Greetings loyal aea365 readers AND authors! I’m Sheila B Robinson, aea365’s Lead Curator and sometimes Saturday contributor. A couple of weeks ago, I wrote this post asking readers what they would like to read about on aea365 in 2015.
Many thanks to those who offered their comments with ideas for posts and responses to each other. Now, I’m writing to ask YOU to consider contributing a post on one of these suggested topics, or any other evaluation-related topic.
Lesson Learned: Here’s what our readers said they would like to see:
Some comments on developing, sharing, and storing lessons learned from evaluations.
A post, or series of posts, about evaluation from the perspective of practitioners whose primary job is not evaluation. Perhaps tips on how best to integrate evaluation into the myriad other, seemingly more pressing, tasks without pushing it to the back burner.
More about observational studies and the best practices in using secondary data sources.
More of other evaluators’ ideas for how to make evaluation more useful for finding solutions to social problems and building a better world.
Evaluators doing research on evaluation - what are they studying and what are they learning?
More on collective impact and experiences in evaluation of partnerships.
Hearing from evaluators of all ages and those working with diverse communities.
More about how private consultants deal with the cost of SPSS…it causes a lot of angst. The other stat packs aren’t user friendly if you have always used SPSS.
Case studies with operational survey questions and instruments.
Hot Tip: We’d love to hear from YOU! Do you have something to share on any of these or other evaluation-related topics? Please send us a draft post for consideration.
Hot Tip: You don’t have to be an expert to contribute! Notice that none of the comments ask for "expert" opinions. Many readers want to hear from everyday evaluators working in the field. You don’t need to be doing something unusual, or cutting edge, or revolutionary. Tell us how a strategy has worked for you in evaluation. Tell us what you’re learning about and experimenting with. Tell us about a lesson you have learned recently and are now applying in your work. Tell us about a book you have read, a course you took, or an experience you had that gave you new insight. What small lesson can you offer to teach others?
Cool Trick: Please follow contribution guidelines! You can find the link right up there…yes, just look up to the top of your screen and there it is! We can only publish posts that adhere to these simple guidelines.
Get Involved: It’s time to share YOUR insights with aea365 readers! We rely on the hundreds of generous authors who have contributed (many multiple times!) over the past 5 years to keep this daily blog going. As you can imagine, collecting 365 posts each year is no small task.
Is this the year you decide to contribute? We certainly hope so, and look forward to hearing from you soon.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Best of aea365: Sheila B. Robinson on Joining the aea365 Author Community
Sheila B. Robinson on Joining the aea365 Author Community
Sheila B Robinson on Calling all aea365 readers: What do YOU want to read more about?
I’m Wendy Tackett, the president of iEval, a part-time faculty member at Western Michigan University, and a new blogger in 2014.
Rad Resource - Carpe Diem: Make Your Evaluations Useful! Three of my iEval colleagues and I started this weekly blog last June primarily to help evaluators make their evaluations more useful.
Hot Tips - Favorite posts: Here are some of our most referenced posts to date, listed chronologically:
Week 5: Using a ‘Top of the Mind Memo’ - Sometimes even interim reports can be too cumbersome, so timely, relevant feedback presented in a short format can be a very useful checkpoint for clients.
Week 16: Avoiding misuse: The evaluation client’s perspective - While most of our readers seem to be evaluators, it’s helpful to step back and think from the client perspective at times.
Week 18: Miss America: The nation’s largest scholarship program for women - Evaluation really is all around us, and this comical take on creating a logic model makes that point while also reminding us to humanize the work we are doing.
Week 20: I was at the European Evaluation Society Biennial Conference - It’s always enlightening to listen to Michael Scriven and try to apply his ideas to your own work.
Lessons Learned - Why I blog: My focus has always been the meaningful use of evaluation findings. I don’t do evaluation for the sake of research, to merely fulfill funder requirements, or to have my work sit unused. I want clients to benefit from the work and use it to make improvements and decisions. The Carpe Diem blog is one way to help others achieve that. I also provide evaluation training by teaching graduate students in evaluation, presenting at conferences, and serving on the Michigan Association for Evaluation board…all with the purpose of helping others practically apply their high-quality evaluation findings.
Lessons Learned - What I’ve learned: First, it’s been great using the team approach - we share ideas with each other and build off of each other’s work. It takes the weekly burden off of one person and lets the readers benefit from multiple perspectives. Second, the online interaction with readers hasn’t been what we expected. Most of our feedback is direct - emails, conferences, personal contact. While we may not get regular, immediate feedback on each post, we’ve gotten enough to think we’re resonating with some people and encourage us to continue!
Hot Tip - I think a good post is a mix of information, usefulness, and fun. We try to share useful information, provide tips that can immediately be used, and end with a silly picture of the blogger that week - just to remind people to have fun with evaluation!
This winter, we’re continuing our occasional series highlighting evaluators who blog. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
Wendy Tackett and Joseph Trommater on Local Evaluation Capacity Building
Kathleen Dowell on the Evaluation Client Feedback Form
Sheila B Robinson on A Call for Blog Posts!
I’m Carrie Tanasichuk, a relatively new evaluation blogger. I currently work in the Program Research & Development department at the YMCA of Greater Toronto, where we support the Association to assess impact and improve programming across the YMCA as well as in the larger community.
Rad Resource - CarrieTanasichuk.com: Broadly speaking, my blog is about anything to do with research and evaluation. I’m a social psychologist by training, and my favourite thing to write about is how social psychological theory can be useful to evaluation. I also find data visualization fascinating, and I like to post tips and tricks that I learn. I’m currently branching out and learning R, and I would like to start posting about that in the future.
Hot Tips: Favorite posts:
Measuring attitudes that predict behaviours - Using attitudinal survey questions to predict future behaviour is something that comes up repeatedly when discussing the external validity of evaluation findings. In this post I look at the theory behind attitudes predicting behaviour.
Developing valid self-report measures - Self-report measures are widely used in evaluation, but lately I’ve come across several people (not evaluators) who are quick to dismiss them. I wanted to do some background research on how to make self-report measures as valid as possible.
A simple GIF illustrating dataviz principles - I didn’t create this GIF, but I love how it communicates a lot of principles very simply. I’ve shared it with a lot of people and they’ve all loved it, too. (For a taste of the idea, see the sketch after this list.)
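Since the GIF itself can’t be embedded here, here is a minimal matplotlib sketch in Python of the same kind of decluttering it demonstrates: stripping chart borders and ticks, labeling bars directly, and emphasizing one comparison. The program names and scores are entirely made up for illustration.

import matplotlib.pyplot as plt

# Made-up program satisfaction scores, purely for illustration.
programs = ["Mentoring", "Tutoring", "Arts", "Sports"]
scores = [72, 65, 58, 44]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(programs, scores, color="#9e9e9e")
ax.barh(programs[:1], scores[:1], color="#1f77b4")  # emphasize one bar

# Declutter: remove the box and ticks the eye doesn't need.
for side in ("top", "right", "bottom", "left"):
    ax.spines[side].set_visible(False)
ax.tick_params(left=False, bottom=False, labelbottom=False)

# Label bars directly instead of making readers trace gridlines.
for y, value in enumerate(scores):
    ax.text(value + 1, y, str(value), va="center")

ax.set_title("Participant satisfaction by program (hypothetical data)")
plt.tight_layout()
plt.show()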
Lessons Learned: Why I blog: I’ve kept a personal blog in one form or another for over 12 years. I’ve blogged about everything from cooking and running to knitting and backpacking! I first came across evaluation blogs 3 years ago when I started reading AEA365. I toyed with the idea of starting an evaluation blog but I was hesitant - would I have anything valuable to add to the conversation? Would anyone even read my posts? I finally decided to take the plunge and I’ve been overwhelmed by how friendly and welcoming other evaluation bloggers have been!
Lessons Learned: What I’ve learned: I sometimes hesitate to publish a post if I don’t feel like I’m an "expert" on the topic. I read a tip from Susan Kistler that you can blog as a fellow learner, rather than as an expert. I think this is wonderful advice and something that I have tried to take to heart. Another lesson learned is to constantly draft posts. Whenever I have an idea for a post I quickly create a draft and jot down some notes. Sometimes I go back and flesh it out to be a full post (and sometimes the draft will sit there forever collecting dust).
This winter, we’re continuing our occasional series highlighting evaluators who blog. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Related posts:
PD Presenters Week: Kaia Ambrose and Simon Hearn on A Whistle-Stop Tour of Outcome Mapping
PD Presenters Week: Mindy Hightower King and Courtney Brown on A Framework for Developing High Quality Performance Measurement Systems of Evaluation
Jonny Morell on Logic Models and Unintended Consequences