

I’m pleased to announce that training and development professionals can now participate in virtual webinars to become certified in my Predictive Evaluation (PE) model, the first and only training evaluation approach that adds the element of prediction. The PE methodology lets trainers and business leaders predict training’s results, value, intention, adoption, and impact, so they can make smarter, more strategic training and evaluation investments. I’m now offering a virtual training program to certify other trainers in the PE approach.

March 2013 Session: virtual classes on March 13, 20, 27, and April 3, 2013 | 7-9 PM EST

My PE Certification program is designed to address the needs of professionals at all levels of training evaluation proficiency. The PE workshop provides Predictive Evaluation training in an interactive, practical forum. Participants will explore why evaluation is critical to training success and hear case studies and best practices from companies that have used the Predictive Evaluation model effectively. They’ll learn how to implement Predictive Evaluation, why it’s important, and how to use it to maximize training ROI.

In these web-based classes, I will teach participants how to apply Predictive Evaluation to maximize the results of their training initiatives. During this course, participants will:

Predict training’s value
Conduct Intention, Adoption, and Impact Evaluations

As part of the certification process, participants will receive a signed copy of my latest book, Predictive Evaluation, the Predictive Evaluation Companion Workbook, and the tools I use, so they can effectively and efficiently conduct their own evaluations.

For more information, please contact me at valelearn@gmail.com. REGISTER HERE for the upcoming PE Certification class.
Dave Basarab . Blog . Jul 28, 2015 11:49am
Are you fighting to justify training’s value within your organization?
Does your organization view training as an expense rather than an investment with a predicted return?
Do you need a method of predicting (forecasting) training’s value to help decide whether to train?
Are your current evaluation efforts always "after the fact"?
Do you want to measure success using leading indicators that drive continuous improvement?

If you answered yes to any of the above, then consider becoming PE Certified. For the first time, I am offering training and development professionals the chance to become certified in my Predictive Evaluation model, an approach that adds the element of prediction to ensure that training delivers business and organizational results.
Dave Basarab . Blog . Jul 28, 2015 11:49am
Over the past few years I have been pondering the question: "Is training an outcome-driven business or an activity-driven function?" Think about it. Too many times we design training that links content to intended business results, deliver the course so that participants’ skills and knowledge change, and then just report the activity. What I continually see in my experience are activity measures being reported by training functions rather than the key impact on business results. For example:

Number of employees trained
Hours trained
Cost to train versus budget
Level 1 end-of-course reaction results (one could argue that these data begin to report outcomes, namely participants’ reaction to, or satisfaction with, the training experience)

I believe these metrics are necessary and essential, but they are activity metrics, not outcome measures, which are even more essential. Outcome measures should state the value that training has brought to the business. In other words: impact.

So what are some outcome-driven business metrics? There are many, and they need to be chosen within the context of your company and its typical reporting measures. Here is the approach I use. If you think of a training program as having a product life cycle (the stages through which a course passes), consider using five stages: 1) Forecast, 2) Design & Development, 3) Delivery, 4) Transfer, and 5) Impact.

Forecast. This helps management cope with the uncertainty of the future, relying mainly on data from the past and present and analysis of trends. Forecasting starts with certain assumptions based on management’s experience, knowledge, and judgment. Elements I use in a training forecast are:

Budget for all stages
Design & development cycle time
Number of employees trained
Predictive Evaluation Impact Matrix (Intention, Adoption, Impact)
Predictive Evaluation ROI

Forecasting is usually coupled with a needs analysis in which the learning objectives are provided. (A small illustrative sketch of a forecast appears at the end of this post.)

Delivery. The activity metrics used in delivery are typically:

Expense versus budget
Number of employees trained
Training hours delivered

During delivery, outcome measures can also be used. The measures I use are:

End-of-course reaction
Predictive Evaluation Intention Score
Predictive Evaluation Belief Score

Transfer. The transfer outcome measure is the rate and degree to which participants have applied training skills on the job. The transfer outcome measure I use is the Predictive Evaluation Adoption Rate.

Impact. Successful transfer should lead to increased employee and organizational performance, which results in Impact: tangible business results from training. The impact outcome measure I use is the Predictive Evaluation ROI.

Thank you for reading and, as always, I invite your comments and thoughts.
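To make the Forecast stage concrete, here is a minimal Python sketch that pairs activity metrics with an outcome forecast. The field names and the simple ROI formula (predicted impact less cost, divided by cost) are illustrative assumptions, not the worksheets from the Predictive Evaluation book.

```python
from dataclasses import dataclass

@dataclass
class TrainingForecast:
    budget: float             # forecast cost across all stages (activity metric)
    employees_to_train: int   # activity metric
    predicted_impact: float   # predicted business value from the Impact Matrix (outcome)

    def forecast_roi(self) -> float:
        """Illustrative ROI forecast: (predicted impact - cost) / cost."""
        return (self.predicted_impact - self.budget) / self.budget

forecast = TrainingForecast(budget=150_000, employees_to_train=600, predicted_impact=450_000)
print(f"Forecast ROI: {forecast.forecast_roi():.0%}")  # -> 200%
```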
Dave Basarab . Blog . Jul 28, 2015 11:49am
Last week I spoke at and attended the Training 2013 Conference in Orlando, FL. These are always wonderful times for me because for three solid days I just think about the training business, which is my passion. I look for ways to be better. I think about how the training function could be better. I observe and spend time thinking about how I can contribute to this wonderful profession that I am blessed to be part of.

Late one afternoon I wrote in my notes: "So many times we make decisions about training based too much on emotions." Think of a time when you were preparing a training plan for a senior executive, where thanks to your needs analysis you had bulletproof facts, reason, and logic on your side, and believed there was absolutely no way the executive could say no to your perfectly constructed training proposal. To do so would be impossible, you figured, because there was no other logical solution or answer. The executive listened politely and then told you exactly what to train and how. She dug in her heels on denying your proposal and refused to budge. She wasn’t swayed by your logic or needs analysis. Were you flabbergasted?

I am embarrassed to say that this has happened to me way too many times. Because my approach was usually to present a set of facts, I was doomed to fail, because decision-making isn’t just logical; according to the latest findings in neuroscience, it’s EMOTIONAL. So at the point of decision, remember that emotions are very important for choosing. In fact, even with what we believe are logical decisions, the very point of choice is arguably always based on emotion.

This finding has enormous implications for training professionals. Bringing emotions into business decisions used to be considered taboo, but times have changed dramatically. People who believe they can build a case for their training program using only reason and numbers are in trouble, because they don’t understand the real factors that are driving the other party to come to a final decision. Those who base their training strategy on logic end up relying on assumptions, guesses, and opinions. I’ve always thought that if my analysis is logical, then leaders can’t argue with it and are bound to come around to my way of thinking. The problem is, you can’t assume that the other party will see things your way.

I’m a business guy in a trainer’s body. I love metrics; my books tell that story. Forecast, measure, and improve. Period. For me, numbers and charts are the core of understanding the success of training. Until now. While I still believe that metrics are essential, I have now added emotional decision-making to my proposals.

Imagine if. A good friend, Elizabeth Doty, and I are members of the Berrett-Koehler Authors Cooperative board of directors. Recently, Elizabeth and a group of authors were attempting to get a new program started with the Co-op, but the leadership team was not listening. Then, on a board conference call, she put together a scenario which she titled "Imagine If." She walked the leadership team through a scenario that showed the future and the benefits of the program; the beauty of this approach was that she touched on their emotions while incorporating costs and return on investment. It worked! They listened and unanimously approved the program.

I now use this approach with all my proposals, and I find that I am closing more deals and getting stronger buy-in than ever before. Try it and see how it works for you.
Thank you for reading and as always, I invite your comments and thoughts.
Dave Basarab . Blog . Jul 28, 2015 11:49am
I am always thinking of ways to make training transfer and Adoption Evaluation better: more efficient, valid, reliable, and meaningful to participants. I have decided to pilot the use of a decision diary as a post-course transfer activity and then use the data collected as part of my Adoption Evaluation work with one of my clients. Before I employ this, please take a few moments to look at what I am thinking about. Do you have any suggestions? Have you done anything like this before? If yes, what were your lessons learned?

My definition of a decision diary for training transfer is: a record of decisions made by the participant post-program associated with the use of skills learned, together with any assumptions made and the reasoning employed. The record is used to derive lessons to assist future decision-making and to evaluate the Adoption Rate of predicted on-the-job behaviors. (A rough sketch of what one entry might capture appears at the end of this post.)

I provide participants with an online system where they can easily capture their decisions using the form below. During the course, we train participants on how to use the system and when to update it. I recommend 5 minutes at the end of each day for 2-4 weeks.

Weekly, I compile the diary entries and send them to each participant for their records. I then compile all the data, use it as part of my Adoption Evaluation data collection methods, and include it in my analysis.

So what do you think? Worth the effort? Doomed to failure? Any thoughts you can provide would be greatly appreciated.
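For readers who want to picture the diary mechanically, here is a minimal Python sketch of what one entry might record, following the definition above. The field names and the example entry are hypothetical; they are not the actual form used in the pilot.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DiaryEntry:
    participant: str
    entry_date: date
    decision: str                 # the decision made on the job
    skill_used: str               # the trained skill the decision drew on
    assumptions: List[str] = field(default_factory=list)
    reasoning: str = ""           # why the participant decided as they did

entry = DiaryEntry(
    participant="J. Smith",       # hypothetical participant
    entry_date=date(2013, 3, 18),
    decision="Escalated a slipping milestone to the sponsor within 24 hours",
    skill_used="Early risk escalation",
    assumptions=["Sponsor prefers early warning over a polished analysis"],
    reasoning="Course module showed early escalation reduces recovery cost",
)
print(entry)
```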
Dave Basarab . Blog . Jul 28, 2015 11:48am
All of us in the T&D field want to provide our clients (internal and external) with information that is valuable to them in an easy and understandable way. You may consider using the dashboard method. I use three types of measures in my dashboards: (1) activity, (2) quality, and (3) value.

For activity, I include:

Number of students trained versus projected
Costs versus budget

For quality, I use elements from my Predictive Evaluation model:

Intention Score - An Intention Evaluation addresses the following question: Are participant goals and beliefs upon course completion aligned with desired goals? Intention Goals are the planned actions that we want participants to author during training as their personal transfer plan. If training is working as designed, participants should be writing actions that drive desired Adoptive Behaviors back into their everyday work patterns.

Adoption Rate - An Adoption Evaluation addresses the following question: How much of the training has been implemented on the job and successfully integrated into the participant’s work behavior? Adoption Evaluation measures the participant goal completion rate against the Adoption Rate defined when you predicted training’s value. Data are collected on the success that participants have had in transferring their goals to the workplace, results are analyzed and compared to the Adoption Rate, and an Adoption Dashboard is produced to report findings. If needed, corrective actions are put in place to improve adoption. (A small sketch of this comparison appears at the end of this post.)

For value, I use the Impact element of my Predictive Evaluation model. Impact Evaluation identifies the direct impact on, and value to, the business that can be traced to training. It assesses in quantifiable terms the value of the training by assessing which Adoptive Behaviors have made a measurable difference. Impact Evaluation goes beyond assessing the degree to which participants are using what was learned; it provides a reliable and valid measure of the results of the training to the organization.

You can find examples of these dashboards here. What other elements have you found that have worked well in your dashboards?
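As a postscript, here is a minimal Python sketch of the quality measure described above: comparing the goal completion rate observed in adoption data with the Adoption Rate set when training’s value was predicted. The sample data and the predicted rate are illustrative assumptions.

```python
# Illustrative adoption data: one record per surveyed participant.
surveyed = [
    {"participant": "A", "goal_completed": True},
    {"participant": "B", "goal_completed": False},
    {"participant": "C", "goal_completed": True},
    {"participant": "D", "goal_completed": True},
]
predicted_adoption_rate = 0.60  # assumption: rate set when training's value was predicted

actual_rate = sum(p["goal_completed"] for p in surveyed) / len(surveyed)
print(f"Actual adoption: {actual_rate:.0%} vs predicted {predicted_adoption_rate:.0%}")
if actual_rate < predicted_adoption_rate:
    print("Below prediction: put corrective actions in place.")
```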
Dave Basarab . Blog . Jul 28, 2015 11:48am
Please join me for the first webinar in my series Making the Most of Corporate Training Dollars: Predicting Training’s ROI. Thursday, June 13, 2013, 2:00-3:00 PM EST.

Do you struggle to define training’s success?
Are you fighting to justify training’s value within your organization?
Does your organization view training as an expense rather than an investment with a predicted return?
Do you need a method of predicting (forecasting) training’s value to help decide whether to train?
Are your current evaluation efforts always "after the fact"?
Do you want to measure success using leading indicators that drive continuous improvement?

WHAT YOU WILL LEARN

In this complimentary webinar, I will spotlight Predictive Evaluation. Using my innovative Predictive Evaluation (PE) Model, trainers and business leaders can now successfully predict training’s ROI, allowing them to make smarter, more strategic training and evaluation investments. Webinar attendees will learn how Predictive Evaluation enables you to effectively and accurately forecast training’s ROI to your company, measure against these predictions, establish indicators to track your progress (and make mid-course corrections if needed), and report the results in a language that business executives respond to and understand. This approach can be used for any sort of training program, in any setting, whether planned, newly implemented, or long-established. Predictive Evaluation is collaborative: supervisors and employees work together to establish standards for success each step of the way. The process helps guarantee that training results will be relevant to the business and gives all participants a sense of ownership in the process.
Dave Basarab . Blog . Jul 28, 2015 11:48am
Please join me on Thursday, June 13th at 2:00 PM Eastern for a complimentary webinar: Predicting Training’s ROI.

Do you struggle to define training’s success?
Are you fighting to justify training’s value within your organization?
Does your organization view training as an expense rather than an investment with a predicted return?
Do you need a method of predicting (forecasting) training’s value to help decide whether to train?
Are your current evaluation efforts always "after the fact"?
Do you want to measure success using leading indicators that drive continuous improvement?

In this complimentary webinar, training and evaluation expert Dave Basarab will spotlight Predictive Evaluation. Using Dave’s innovative Predictive Evaluation (PE) Model, trainers and business leaders can now successfully predict training’s ROI, allowing them to make smarter, more strategic training and evaluation investments. Dave, who (literally) wrote the book on Predictive Evaluation, regularly helps his clients maximize their training ROI by as much as 200 to 300%. In this webinar, he will explain how to accomplish this.

Webinar attendees will learn how Predictive Evaluation enables you to effectively and accurately forecast training’s ROI to your company, measure against these predictions, establish indicators to track your progress (and make mid-course corrections if needed), and report the results in a language that business executives respond to and understand. This approach can be used for any sort of training program, in any setting, whether planned, newly implemented, or long-established. Predictive Evaluation is collaborative: supervisors and employees work together to establish standards for success each step of the way. The process helps guarantee that training results will be relevant to the business and gives all participants a sense of ownership in the process.

Space is limited. Reserve your webinar seat now at: https://www1.gotomeeting.com/register/478152048
Title: Predicting Training’s ROI
Date: Thursday, June 13, 2013
Time: 2:00 PM - 3:00 PM EDT
After registering you will receive a confirmation email containing information about joining the webinar.
Dave Basarab . Blog . Jul 28, 2015 11:47am
Please join me on Thursday, July 11, 2013, 2:00-3:00 PM EST for this complimentary webinar on my Learning to Performance model.

When companies diagnose performance gaps, training is often the solution. Most organizations, and trainers, rely on isolated training events, but Dave Basarab now offers his innovative Learning to Performance system, a complete training approach that combines learning, leadership, and change management competencies to produce documented, sustainable results and value. Dave’s Learning to Performance system is a complete, customized, end-to-end approach. His unique, multi-step process includes pre-training strategy, planning, design and development, training, and post-training transfer, application, and support. The Learning to Performance methodology works with any content (time management, sales, project management) for organizations in any industry. In this dynamic presentation, he will demonstrate how this unique recipe is the key to successful training, transfer, and impact.

In this session, attendees will learn how to:

Implement the Learning to Performance model.
Develop an Impact Map.
Integrate instructional design and course development into the Learning to Performance model.
Redesign the work processes and tools that participants will use when applying their new or enhanced skills.
Prepare the management team to ensure successful training transfer.
Run classes utilizing the plan-of-action approach.
Implement post-program organizational support by the management team.
Launch effective training transfer techniques.
Instill post-program application support by trainers.
Dave Basarab . Blog . Jul 28, 2015 11:46am
Last week I was honored to have another article published in Chief Learning Officer. In it I discuss and give pragmatic tips on how to get leaders involved in training efforts. You can read the article here. I believe I have only touched the surface on how we get leaders involved. How have you tackled this problem?
Dave Basarab . Blog . Jul 28, 2015 11:46am
Impact Evaluation identifies the direct impact on, and value to, the business that can be traced to training, and it is part of my Predictive Evaluation model. It assesses in quantifiable terms the value of the training by assessing which Adoptive Behaviors have made a measurable difference. Impact Evaluation goes beyond assessing the degree to which participants are using what was learned; it provides a reliable and valid measure of the results of the training to the organization. When participants report a positive impact from training, this approach allows you to articulate how and why training is impacting the business, and to leverage this information to enhance the organizational impact of future deliveries. Conversely, when little or no impact is found, this evaluation method uncovers why and determines what can be done to remedy that undesirable outcome.

Impact Evaluation seeks value to the business. This is done by collecting data on actual business results that participants attribute to successful adoption of their Intention Goal(s). In their Adoption Evaluation survey, participants told us what they did, and we classified them as Successful Adoption or Unsuccessful Adoption. From the successful participants, collect additional impact data via three methods: completion of an Impact Survey, interviewing participants, and examination of company records to confirm findings.

The Impact Hunt

It is impractical to follow every participant and determine impact. So I use sampling, with a subset of participants, to estimate the impact of the whole population (all participants). I refer to this as the "Impact Hunt." The method is:

Start with all participants and survey them via the Adoption Evaluation survey technique.
Narrow the potential pool of impact analysis participants to only those judged to have successfully adopted.
Send everyone who has successfully adopted a detailed Impact Evaluation survey.
Using the Impact Survey data, identify the people who have the highest likelihood of self-reporting impact. This becomes your Impact pool.
From this pool, randomly sample participants with whom to conduct in-depth impact interviews, and evaluate their results.

(A minimal sketch of these narrowing steps appears at the end of this post.) Graphically, the Impact Evaluation process is:

As always, please send me your thoughts on this method. Next blog: Develop the Impact Survey.
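Here is the minimal sketch of the Impact Hunt narrowing steps mentioned above, in Python. The participant records, the impact-likelihood field, and the 0.6 threshold are illustrative assumptions; in practice the Impact Survey data drive the narrowing.

```python
import random

# Illustrative participant records after the Adoption Evaluation survey.
participants = [
    {"name": "P1", "adoption": "Successful", "impact_likelihood": 0.9},
    {"name": "P2", "adoption": "Unsuccessful", "impact_likelihood": 0.1},
    {"name": "P3", "adoption": "Successful", "impact_likelihood": 0.7},
    {"name": "P4", "adoption": "Successful", "impact_likelihood": 0.3},
    {"name": "P5", "adoption": "Successful", "impact_likelihood": 0.8},
]

# Narrow to participants judged to have successfully adopted.
adopted = [p for p in participants if p["adoption"] == "Successful"]

# Of those surveyed, keep the ones most likely to self-report impact (the Impact pool).
impact_pool = [p for p in adopted if p["impact_likelihood"] >= 0.6]

# Randomly sample the pool for in-depth impact interviews.
interviewees = random.sample(impact_pool, k=min(2, len(impact_pool)))
print("Interview:", [p["name"] for p in interviewees])
```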
Dave Basarab . Blog . Jul 28, 2015 11:45am
Adoption Evaluation, as an output, sorts participants into either Successful Adoption or Unsuccessful Adoption. Participants labeled Successful must meet two criteria: (1) they self-report successful implementation of their Intention Goal, and (2) what they did matches an Adoptive Behavior on the Impact Matrix. These are the participants most likely to have results, and you want more information from them on that impact. You do this via an Impact Survey.

The Impact Survey is like a scorecard of results. From the participants, you want it to capture the following:

Details on the performance: what they did, what tools and techniques they used
Results realized from that performance: what impact has occurred (cost savings, higher production, fewer defects, increased sales, etc.)
Where claimed results can be validated
The percentage of the impact that training provided
Whether the impact is sustainable and repeatable
The percentage of the impact contributed by other factors (external or internal to the company)

The survey needs to collect enough data from participants so that you can analyze the results and compare them to the predicted impact (found in the training’s Impact Matrix). A good starting point is to review the Impact Matrix and use it as your guide for the survey. (A rough sketch of the fields such a survey record might hold appears at the end of this post.)

SAMPLE PREDICTIVE EVALUATION IMPACT SURVEY

As always, please send me your thoughts on this method. Next blog: Collect Detailed Impact Data from Successfully Adopted Participants.
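As a rough sketch of the survey record mentioned above, the Python dataclass below carries one field per item the Impact Survey is meant to capture. The field names, types, and sample response are assumptions for illustration, not the actual survey instrument.

```python
from dataclasses import dataclass

@dataclass
class ImpactSurveyResponse:
    participant: str
    performance_details: str   # what they did, which tools and techniques they used
    result_description: str    # cost savings, higher production, fewer defects, etc.
    result_value: float        # claimed value, in dollars
    validation_source: str     # where the claimed result can be validated
    pct_from_training: float   # share of the impact attributed to training
    pct_other_factors: float   # share attributed to other internal/external factors
    sustainable: bool          # is the impact sustainable and repeatable?

response = ImpactSurveyResponse(
    participant="M. Lee",      # hypothetical respondent
    performance_details="Used the negotiation checklist on two vendor renewals",
    result_description="Lower annual license cost",
    result_value=12_000.0,
    validation_source="Procurement system; contract IDs on file",
    pct_from_training=0.7,
    pct_other_factors=0.3,
    sustainable=True,
)
print(response.result_value * response.pct_from_training)  # value credited to training
```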
Dave Basarab . Blog . Jul 28, 2015 11:45am
Prior to sending the survey, it is a good idea to build the data analysis model that will be used to summarize the data. It usually takes the form of a spreadsheet or database. In a spreadsheet, each row is one participant response and the columns are the data items.

When to collect data is a function of the number of course deliveries and the volume of participants trained. Other guidelines on the timing and frequency of data collection include the following:

Intention Evaluation: collect at the conclusion of each course. For classroom courses, collect a goal sheet from each participant. For e-learning courses, embed the goal sheet into the end-of-course programming and submit it to the data analysis model.
Adoption Evaluation: collect data when a sufficient number of participants have graduated and have had the opportunity to transfer their skills to the workplace. If you have a sufficient population size, you do not have to collect adoption data from all participants. Collect enough data to ensure a reasonable return rate and sample to obtain a high degree of confidence.
Impact Evaluation: collect data immediately after the Adoption data have been analyzed and reported.

Survey Administration

Draft an email from a company executive soliciting survey completion, and watch the response rate (the number of people who answered the survey divided by the number of people in the sample, usually expressed as a percentage). Send a reminder email requesting completion of the survey. You want to get the highest return rate possible, because a high response rate bolsters statistical power, reduces sampling error, and enhances the generalizability of the results to the population (participants).

Data Scrubbing

When collection has ceased, you need to scrub the Impact data. Data scrubbing, sometimes called data cleansing, is the process of detecting and removing or correcting any information in a database (or spreadsheet) that has some sort of error. The error may be because the data are wrong, incomplete, formatted incorrectly, or a duplicate copy of another entry (the participant responded multiple times). Simply review the data and make the necessary corrections. (A small sketch of the response-rate and de-duplication mechanics appears at the end of this post.)

As always, please send me your thoughts on this method. Next blog: Analyze Impact Survey Data.

Previous Blogs in the Impact Evaluation Series
The Impact Hunt
Develop Impact Survey
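Here is the small sketch of the survey-administration and data-scrubbing mechanics referred to above: computing the response rate and dropping duplicate responses. The sample data and the "keep the latest response" rule are illustrative assumptions.

```python
# Illustrative survey responses; participant "A" answered twice (a duplicate).
responses = [
    {"participant": "A", "submitted": "2013-05-01", "value": 5000},
    {"participant": "B", "submitted": "2013-05-02", "value": 0},
    {"participant": "A", "submitted": "2013-05-03", "value": 5500},
]
sample_size = 10  # number of people the survey was sent to

# Response rate: distinct respondents divided by the sample size.
response_rate = len({r["participant"] for r in responses}) / sample_size
print(f"Response rate: {response_rate:.0%}")

# Data scrubbing: keep only the most recent response per participant.
latest = {}
for r in sorted(responses, key=lambda r: r["submitted"]):
    latest[r["participant"]] = r
clean = list(latest.values())
print(f"{len(responses) - len(clean)} duplicate(s) removed; {len(clean)} clean rows")
```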
Dave Basarab . Blog . Jul 28, 2015 11:44am
Many times evaluators struggle with the balance between using quantitative and qualitative data in training evaluations. I like to use both. However, the element that determines what type of data I use always comes back to the purpose of the evaluation and the questions that the evaluation will answer. These two things determine what types of data are needed and how to collect them. It is rare that I do not use both data types.

I view quantitative data as the "business end" of the evaluation. Senior executives are used to seeing data in charts and graphs and like Impact data in those forms. I use an Impact Dashboard exclusively for these types of data. I always use qualitative data with Impact Evaluation; I view these as the "heart and soul" of the Impact, and I have found that senior executives gravitate more to these than to the charts and graphs. What criteria do you use to determine what type of data are used?

Quantitative Data Analysis: Relative Comparison
Quantitative Data Analysis: Measures of Central Tendency and Variability
Quantitative Data Analysis: Frequency Distribution

Analyzing Qualitative Data

Qualitative data analysis involves the identification, examination, and interpretation of patterns and themes in Impact data, and it determines how these patterns and themes help answer the question of how much impact has been realized. It is important to note that qualitative data analysis is an ongoing, fluid, and cyclical process that happens throughout the data collection stage of your evaluation project and carries over to the data entry and analysis stages. Although the steps are somewhat sequential, they do not always (and sometimes should not) happen in isolation from each other.

Read through the data. Raw data are ordered and organized so that useful information can be extracted from them. Note what the data do and do not contain. Make notes of the patterns and themes that emerge from the data. Answer these questions: What are the data telling you? What is missing? What are the central themes contained in the data? Are you uncovering something unexpected? What is it? Is it pertinent to the evaluation?

Drawing Conclusions About the Entire Population

For each Adoptive Behavior, look at your data and answer this question: What percentage of the sample has successfully adopted? For example, if 50 percent of the sample has been coded as Successful Adoption, you can draw the conclusion that 50 percent of all participants (the population) have successfully adopted. So if the population is 600 participants, 300 would be assumed to be Successful Adoption. Next answer: What statistic best describes the results of the sample? Mean, median, or mode? For example, if you judge that the $5,000 mean for an Adoptive Behavior is representative, choose that value. You then can conclude that the mean value applies to the population (all participants). Calculate the total impact for the Adoptive Behavior by multiplying the statistic chosen (in this example, the $5,000 mean) by the number of Successful Adoption participants in the population (in this example, 300). Therefore, the total impact for this Adoptive Behavior is $5,000 × 300 = $1,500,000. (A small calculation sketch appears at the end of this post.)

Note: For a sample to be used as a guide to an entire population, it is important that it be truly representative of that overall population. With representative sampling assured, inferences and conclusions can be safely extended from the sample to the population as a whole. A major problem lies in determining the extent to which the sample chosen is actually representative. Statistics offers methods to estimate and correct for any random trending within the sample and data collection procedures. There are also methods for designing experiments that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population. Statisticians describe stronger methods as more "robust." If you have concerns about this, seek assistance from a statistician.

Answering the Impact Questions

Answer two questions (you will validate your initial findings later, with participant interviews and by investigating company records):

What impact has the company received from training? Sum all the value being reported and discount it by the external contribution factor. You may also view it as an average per participant if that is warranted.
How does that impact compare to the predicted impact? Look at what you have found and compare it to the course’s Impact Matrix. Are the results ahead of, behind, or on track with the predicted impact? Are you seeing unforeseen impact? Are participants doing things related to training and producing impact that was not predicted?

Summary

Interpretation is the process of giving meaning to the results of the analysis and deciding the significance and implications of what the data show. Answering the evaluation question means interpreting the analyzed data (quantitative and qualitative) to see whether the results support the answers to the evaluation questions.

Previous Blogs in the Impact Evaluation Series
The Impact Hunt
Develop Impact Survey
Collect Detailed Impact Data from Successfully Adopted Participants
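The calculation sketch referenced above follows. It simply reproduces the worked arithmetic: apply the sample’s Successful Adoption percentage to the population, then multiply by the chosen representative statistic. The numbers mirror the example in the post; whether the mean is truly representative remains a judgment call.

```python
population = 600               # all participants
sample_adoption_pct = 0.50     # share of the sample coded Successful Adoption
representative_impact = 5_000  # chosen statistic (the mean) for one Adoptive Behavior

successful_in_population = int(population * sample_adoption_pct)  # 300
total_impact = representative_impact * successful_in_population   # $1,500,000
print(f"Estimated total impact: ${total_impact:,}")
```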
Dave Basarab . Blog . Jul 28, 2015 11:44am
Join me for a free webinar on August 8th, 2-3 PM ET. In this complimentary webinar, you’ll meet training and evaluation expert Dave Basarab, inventor of the unique Learning Burst training method, which gives participants the opportunity to learn in short bursts anywhere, at any time. REGISTER

Each Learning Burst covers one topic in a "mini-course" that consists of an 8 to 10-minute audio cast and a PDF workbook that includes relevant simulations, quizzes, interactive exercises, and case studies. To date, thousands of employees from Fortune 500 companies have taken Learning Bursts as a replacement for, or supplement to, traditional classroom training. Participate in this webinar to find out why your company should leverage this cutting-edge training approach as well.

In this session, participants will learn:

How to determine which courses are candidates for the Learning Burst method.
How to conduct a Learning Burst needs assessment.
The elements of instructional design unique to the Learning Burst method.
How to create a Learning Burst audio segment.
The elements in a Learning Burst workbook.
How to successfully launch a Learning Burst course.

Title: Learning Bursts: A Different Way to Deliver Training
Date: Thursday, August 8, 2013
Time: 2:00 PM - 3:00 PM EDT
Dave Basarab . Blog . Jul 28, 2015 11:43am
One element of data collection for an Impact Evaluation involves interviews with participants, in order to find examples of the best results and to identify those employees who have had no or very little impact. You need only interview a subset of the graduates in order to generalize conclusions to the entire population. From the Adoption Evaluation, review the lists of participants you classified as Successful Adoption and Unsuccessful Adoption: these are the potential individuals to interview. Randomly select a sample from both groups.

Interview enough people in the High-Impact category to find approximately ten of the best examples of impact. This may involve interviewing fifteen to twenty graduates or more. Note: the more examples found, the stronger the correlation from this sample to the entire population. However, these numbers depend on the sample size (the number of graduates). With fifty graduates, it is likely that ten to fifteen are in the High-Impact category, which requires fewer interviews.

The critical factors affecting interviewing are the time and the budget to conduct the interviews. The more interviews you can conduct the better, but it needs to be within time and budget constraints. For estimation purposes, a single High-Impact interview (conducting the interview and summarizing the result) takes approximately one hour.

Impact Interview Protocol

Follow the protocol below when conducting Impact interviews:

Inviting High-Impact Participants to the Interview
Invite participants to the Impact interview via email. A sample email is:
Inviting Low/No-Impact Participants to the Interview
High-Impact Interview Template
Low/No-Impact Interview Template

Previous Blogs in the Impact Evaluation Series
The Impact Hunt
Develop Impact Survey
Collect Detailed Impact Data from Successfully Adopted Participants
Analyze Impact Survey Data
Dave Basarab . Blog . Jul 28, 2015 11:42am
Conducting an Effective Impact Evaluation Interview

Here are some tips for conducting an effective Impact Evaluation interview:

Listen effectively. Be a good listener; know how to redirect a conversation to get the critical information as quickly as possible. Listen without judging.
Prepare. Review the employee’s Adoption and Impact Survey data prior to the interview.
Ease on in. Start with easy-to-answer questions; just follow the interview protocol.
Set the tone. Let the interviewee know how the data will be used and that everything said will be held in strictest confidence. A standard phrase is this: "Mary, I am an external consultant who has been hired by the company to evaluate the effectiveness of the XYZ Training Course. You have been selected from the people who completed the course survey last week. I am interested in how you have applied what you learned in class and what impact it has had on you, your team, or the company. What you tell me will be held in strictest confidence. I may, however, include your story in a summary report without your name. Are you okay to proceed?" Also, express your appreciation for the time that participants have taken to talk to you. Explain how the interview will proceed and then try to follow that format as closely as possible.
Use the script. Follow the interview protocol.
Listen to your instincts. As you hear things regarding Adoption and Impact, continue to probe until you feel you have uncovered everything.
Know what you want. If you’ve set aside thirty minutes for the interview, do your best to stick with that schedule. But be prepared to cut the interview short and jump to the concluding questions. Don’t waste your time or the interviewee’s if you are not talking about Impact data. The most common side-tracks interviewees bring up are discussing how well they liked the class, complaining about the company in general, and so on.
Write it down. Forget about remembering everything that transpires during an interview. You’ll want to take notes so that you can review them at a later time. This is especially important if you’re interviewing many people.
Probe. As you uncover areas that you feel are promising, investigate further by asking probing questions. Some common probing questions are as follows: Tell me more about that. When you did that, what was the result? How did you do that? What did you do specifically? What was the result? You mentioned [insert his/her result]; can you place a metric on that result? What is the metric?

Create Impact Profiles

Impact Profiles show, in a short, salient manner, a profile of successful adoption that has significant impact. To create these profiles:

Print the interview summaries for your review.
Read each one, and sort them into two piles: the ones that are the best examples of Impact as aligned with the Impact Matrix, and those that are not as powerful.
Use the following items as criteria for placing an interview summary in the Impact Profile stack:
Direct impact on and value to the business can be traced to training via the participant’s Intention Goals and Adoptive Behavior.
Adoption of training has made a measurable difference.
Results validated in the interview support the data from the Impact Survey.
Results are in alignment with the predicted Impact identified in the Impact Matrix.

The interviews that pass the preceding test become your Impact Profiles: examples of the best results that the training has produced. These validate the Impact data reported in the Impact Survey and are further proof of the Impact created.

The Impact Profile is a one-page synopsis of the participant’s story. Review these profiles, and answer this question: Do the profiles support the answers to the questions you determined from the Impact Survey data analysis? Those questions are the following:

What impact has the company received from training?
How does that impact compare to predicted impact?

Sample Impact Profile

Previous Blogs in the Impact Evaluation Series
The Impact Hunt
Develop Impact Survey
Collect Detailed Impact Data from Successfully Adopted Participants
Analyze Impact Survey Data
Interview Successful and Unsuccessful Participants (part 1 of 2)
Dave Basarab . Blog . Jul 28, 2015 11:41am
You should now have a clear picture of the impact that the training is providing and whether or not it is as predicted. An issue with the current impact data is that it is self-reported via the Impact Survey and only somewhat validated with the Impact interviews. This is good and, many times, sufficient for you to produce the Impact Dashboard and draw your conclusions on the value of training. If you choose, you can further build your case by collecting data from existing company records. This type of data is known as extant data.

Extant data are existing data, information, and observations that have been collected or that are available for use in the assessment and evaluation processes. Basically, you analyze records and files that the company already has, rather than set out to gather additional information. Where and what files you look at depends on the impact being claimed by participants (e.g., sales figures, attendance figures, call-backs for repair, etc.). Customer emails, tech support log files, marketing and business plans, business requirements documents, and vision statements are just a few of the types of information that help you validate the impact.

The quickest way to gather extant data is to ask High-Impact interview participants to provide the documentation that proves their case (assuming they have permission to share it with you). This typically comes in the form of presentations, spreadsheets, sales records, financial statements, manufacturing reports, time and motion studies, etc. When you receive the documentation, review it and ensure that you have sufficient information to validate the claimed impact. At times, the participant no longer has the backup documentation or it simply does not exist. This does not mean that you do not include the impact in your Impact Dashboard, but you may need to add a disclaimer to the report stating so.

Bringing Meaning to Impact Results

It is now time to bring meaning to the Impact results that you have collected. Do this by:

Analyzing the results and drawing conclusions: is the training program contributing to achievement of the business outcomes?
Collecting training expense data.
Comparing results and ROI to the predictions in the Impact Matrix. (A small ROI sketch appears at the end of this post.)

Previous Blogs in the Impact Evaluation Series
The Impact Hunt
Develop Impact Survey
Collect Detailed Impact Data from Successfully Adopted Participants
Analyze Impact Survey Data
Interview Successful and Unsuccessful Participants (part 1 of 2)
Interview Successful and Unsuccessful Participants (part 2 of 2)
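The ROI sketch referenced above follows. It nets validated impact against training expenses and compares the result with the prediction in the Impact Matrix. The figures and the simple ROI formula are illustrative assumptions rather than the Predictive Evaluation worksheets.

```python
validated_impact = 1_500_000  # impact credited to training after validation
training_expense = 400_000    # fully loaded cost of the program
predicted_roi = 2.0           # ROI predicted in the Impact Matrix (200%)

actual_roi = (validated_impact - training_expense) / training_expense
print(f"Actual ROI: {actual_roi:.0%} vs predicted {predicted_roi:.0%}")
print("On or ahead of prediction" if actual_roi >= predicted_roi else "Behind prediction")
```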
Dave Basarab . Blog . Jul 28, 2015 11:40am
Title: Training Transfer: the Secret Sauce for Making it Stick
Date: Thursday, September 12, 2013
Time: 2:00 PM - 3:00 PM EDT

I believe that learning is continuous and shouldn’t stop when a training session ends. This approach is a total-immersion learning experience that emphasizes training transfer, which drives adoption, which drives impact (company value). We need to think beyond training sessions to ensure the information "sticks" and will be used effectively in the workplace. This interactive conversation demonstrates how to incorporate training and development lessons into "real world" situations.

In this complimentary webinar, I will explain how to integrate training into your teams’ work behavior and implement activities and techniques that will increase your company’s training transfer rate to 60-70% or more. Additionally, I will explain how to use innovative techniques throughout the entire process, from pre-course to training to post-course, to successfully drive training transfer.

In this session, participants will learn:

What "training transfer" is and why it is important.
More than 20 specific actions to take before, during, and after training to improve the "stickiness" of their training.
How to partner with management for training’s roll-out, so more skills are transferred to the workplace.
The importance of planning for training transfer in the earliest set-up phases of your training, and how this increases training’s ROI.
How to cultivate a "training transfer mindset and culture" in your company.

Reserve your seat at: https://www1.gotomeeting.com/register/264748329
Dave Basarab . Blog . Jul 28, 2015 11:39am
An Impact Dashboard summarizes the Impact found. A typical Impact Dashboard is made up of four pages/sections: (1) payback, results, and key findings; (2) impact by Adoptive Behavior; (3) impact examples and recommendations; and (4) the Impact Profiles.

Sample Impact Dashboard - page 1
Sample Impact Dashboard - page 2
Sample Impact Dashboard - page 3
A Different Impact Dashboard

Impact Evaluation Summary

An Impact Evaluation addresses the following questions:

What business results can be traced to the goal adoption of participants?
Are results as predicted for the business?
What is the typical profile of a participant who has achieved results?
What additional business results (value) could be achieved if graduates who have little or no adoption were to adopt their goals?

The reason to train should be to improve the company’s bottom line. By measuring the effect on company profits of participant intentions, adoption, and impact, and comparing that with the predicted value, a company can determine its return on investment and maximize organizational and business results.

Previous Blogs in the Impact Evaluation Series
The Impact Hunt
Develop Impact Survey
Collect Detailed Impact Data from Successfully Adopted Participants
Analyze Impact Survey Data
Interview Successful and Unsuccessful Participants (part 1 of 2)
Interview Successful and Unsuccessful Participants (part 2 of 2)
Collect Impact Data from Company Records
Dave Basarab . Blog . Jul 28, 2015 11:39am
A training evaluation is successful when the needs of the stakeholders have been met. A stakeholder is anybody directly or indirectly impacted by the evaluation. As a first step, it is important to identify the stakeholders for your evaluation. It is not always easy to identify the stakeholders of an evaluation, particularly those impacted indirectly. Examples of stakeholders for a training evaluation are as follows:

The head of the Training and Development organization
Key executives from various business units or staff functions (note: these may be represented by a Steering Committee or other such body)
Instructional designers, course developers, and instructors (internal or external)
External training suppliers (to a lesser extent)

Once you understand who the stakeholders are, the next step is to establish their needs. The best way to do this is by conducting stakeholder interviews. The questions that you ask stakeholders can depend on (1) who you are talking to and (2) how they fit into the overall picture. When formulating stakeholder questions, ask yourself the following: What do you want to find out? How many people can you interview? What will you do with the data to help guide the evaluation? What is the underlying reason for asking a specific question?

Some questions that you may wish to use with the stakeholders are:

What business issue(s) is this training going to help solve?
Why does this issue exist?
Why do you want an evaluation?
What kind of information would you like to see from the evaluation?
What decisions will you make based on this evaluation information?
What is your timetable for getting evaluation information? How would you like to see it and receive it?
Who else should I talk to?

Take time during the interviews to draw out the true needs that create real benefits. Often stakeholders talk about needs that aren’t relevant and don’t deliver benefits. These can be recorded and set as a low priority. Once you have conducted all the stakeholder interviews and have a comprehensive list of needs, the next step is to prioritize them. From the prioritized list, create a set of questions the evaluation will answer.

Previous posts in this series:
Getting Started

Coming up in this series of posts:
Elements in an Evaluation Plan
Evaluation deliverables
Evaluation schedule
Performing the evaluation
Training evaluation and continuous improvement
Training evaluation organizational readiness

As always, I would enjoy hearing your thoughts on this topic.
Dave Basarab . Blog . Jul 28, 2015 11:39am
In addition to the evaluation questions, other elements of the evaluation plan arise from the stakeholder interviews and your research into the course. The elements that you should consider as part of the plan are shown below.

Course to be evaluated. List the title and purpose of the course. Provide a history (business reason) for the course and what it is intended to achieve. For new courses (ones yet to be developed), these data can be found in the needs analysis and/or instructional design documents. For existing courses, use the current training materials.

Purpose of the evaluation. Documents why the evaluation is being undertaken and how the information will be used (comes from stakeholder requirements). Example: Our approach is based on moving away from the typical measurement of activity or level-based measures in favor of focusing on value-driven continuous improvement efforts. This ensures that the training investment creates value and produces the behaviors and results that lead to the desired business outcomes.

Questions the evaluation will answer. These are the basic building blocks for the training evaluation. They are the questions that key stakeholders would like to see answered. Examples: Are participant intentions and beliefs upon program completion aligned with desired values? Are participants using (adopting) the skills on the job? Is the training program contributing to achievement of the business outcomes? If yes, to what degree? If no, why not?

Key stakeholders. These are the individuals who have a vested interest in the evaluation information.

Once this is completed, it’s time to move on and look at the evaluation deliverables (next week’s post).

Previous posts in this series:
Getting Started
Questions Evaluation will Answer

Coming up in this series of posts:
Evaluation deliverables
Evaluation schedule
Performing the evaluation
Training evaluation and continuous improvement
Training evaluation organizational readiness

As always, I would enjoy hearing your thoughts on this topic.
Dave Basarab . Blog . Jul 28, 2015 11:38am
Evaluation Deliverables

Using the evaluation questions and the other planning elements you defined earlier, create a list of the information that the evaluation needs to deliver in order to answer the evaluation questions. Specify when and how each item must be delivered. These are the deliverables of the evaluation. Add the deliverables to the evaluation plan, with an estimated delivery date. More accurate delivery dates can be established during the scheduling phase, which is next.

The Evaluation Schedule

Create a list of tasks that need to be carried out for each deliverable. For each task, identify the following:

The amount of effort (hours or days) required to complete the task
The resource that will carry out the task

Once you have established the amount of effort for each task, you can work out the effort required for each deliverable and determine an accurate delivery date. Update your deliverables section with the more accurate delivery dates. A common problem discovered at this point occurs when an evaluation has an imposed delivery deadline from the sponsor or steering committee that is not realistic based on your estimates. If you discover that this is the case, you must contact the sponsor/committee members immediately. The options you have in this situation are the following:

Renegotiate the due dates
Employ additional resources
Reduce the scope of the evaluation

Congratulations! Having followed all the preceding steps, you should have a good evaluation plan. Remember to update your plan as the evaluation progresses and measure progress against the plan.

Previous posts in this series:
Getting Started
Questions Evaluation will Answer
Elements in an Evaluation Plan

Coming up in this series of posts:
Performing the evaluation
Training evaluation and continuous improvement
Training evaluation organizational readiness

As always, I would enjoy hearing your thoughts on this topic.
Dave Basarab . Blog . Jul 28, 2015 11:38am
I am honored to have a new article just published. In the article I discuss the elements of my Predictive Evaluation model. Watch this short video about the PE model. To learn more about the model, take a look at my latest book, Predictive Evaluation.
Dave Basarab . Blog . Jul 28, 2015 11:37am