I'd like to announce the winners of the 2007 Neon Elephant Award, given this year to Sharon Shrock and Bill Coscarelli for advocating against the use of memorization-level questions in learning measurement and for the use of authentic assessment items, including scenario-based questions, simulations, and real-world skills tests. The Neon Elephant Award is given to a person, team, or organization exemplifying enlightenment, integrity, and innovation in the field of workplace learning and performance. Announced on the day of the winter solstice—the day of the year when the northern hemisphere turns away from darkness toward the light and hope of warmer days to come—the Neon Elephant Award honors those who have truly changed the way we think about the practice of learning and performance improvement. Award winners are selected for demonstrated success in pushing the field forward in significant, paradigm-altering ways while maintaining the highest standards of ethics and professionalism. See the full announcement by clicking here...
Will Thalheimer . Blog . Jul 15, 2015 02:57pm
The eLearning Guild has asked me to lead a discussion at their upcoming conference (Guild Annual Gathering) on how we might think about evaluating Learning 2.0 interventions. I'd welcome your examples and insights. For those who don't know what "Learning 2.0" means, I'll forgo my cynical answer and say that others describe Learning 2.0 as learning that enables learner-creation of information, contrasting it with the stereotypical traditional model where the teacher teaches and the learner absorbs the information (or the e-learning delivers content and the learner absorbs it). So, Learning 2.0 is said to include such things as Wikis, Blogs, Learner Portfolios, Media Development and Sharing by Learners, Informal Learning, etc.

Here are a few things I'm contemplating:

- Traditional metrics are certainly appropriate, because bottom line we want to know whether Learning 2.0 interventions produce learning, enable on-the-job performance, and produce desirable individual and organizational outcomes.
- Comparisons to other methods of learning. It is especially important to see whether Learning 2.0 methods (on the positive side) create more elaborate mental models, produce more satisfaction, etc., and (on the negative side) waste time, create unproductive distractions, or communicate incorrect or inappropriate information.
- We need to measure not only what HAS been learned, but also what MAY BE LEARNED IN THE FUTURE. It could be, for instance, that Learning 2.0 is inefficient for learning anything specific, but enables faster future learning in the same area of inquiry.

Here's where I can use your help. Let me know if you know of any of the following:

- Rigorous research studies on Learning 2.0 interventions
- Anecdotal evidence on Learning 2.0 interventions

Better yet, join me at the eLearning Guild's Annual Conference -- specifically the Learning Management Colloquium -- and discuss this in real time.
Will Thalheimer . Blog . Jul 15, 2015 02:57pm
MIT researchers have developed a technology to track people's social interactions, for example, at a conference. Check out this link to learn more. Can we use such a technology for learning? Certainly, we could use the technology to help people learn about their current networking tendencies and to give learners feedback as they attempt to change those tendencies. But what other applications can we brainstorm? Let me give this a try.

Leadership Simulations: Does the technology enable better in-basket simulations, or in-basket simulations that are more economical to deploy (because they don't require the same high numbers of observer/consultants to observe interactions and provide feedback)? Note: By in-basket simulations, I mean simulations in which many learner/players each play a different role, each have different in-basket tasks to accomplish, and the way they act in the simulation is by talking with other learner/players.

On-the-job Leadership Activity Feedback: Imagine a retail store manager who is tasked (partially) with developing his or her people (those who work in the store). The system could track the number of interactions the store manager had with each employee, and the interactions the employees had with each other. This "intelligence" data could be used by a store manager to learn about the number of learning opportunities (i.e., coaching, providing feedback, observing, encouraging, sharing, etc.) that occur in a given period of time. Such data could be compared with "best-practice" store manager data, and store managers could use this information to change their behavior. Admittedly, quantity doesn't equate to quality, but by tracking such social contact, managers might get a start in thinking about increasing the number of "learning opportunities."

Organizational Learning: Organizations (or business units, teams, etc.) could map their people's social networks to find out who the most networked folks are. Such information could be utilized to select for job assignments, project roles, etc., or to actively change the observed dynamics (for example, encouraging some people to spend more time in individual productive work while encouraging others to limit their isolation).

Anyway, these are some initial thoughts. I expect some enlightened simulation companies to begin brainstorming ways to use the technology to differentiate their offerings from the competition. In the meantime, can you think of any other learning opportunities inherent in the technology?
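As an aside, here's a minimal sketch (in Python) of how the store-manager idea might work computationally: tally contacts per employee from an interaction log and compare against a benchmark. The log format, the names, and the weekly benchmark are all invented for illustration; a real badge system would have its own data format.

```python
from collections import Counter
from datetime import date

# Hypothetical interaction log: (day, manager, employee) tuples as a
# social-tracking badge system might capture them. All data is invented.
interactions = [
    (date(2008, 3, 3), "manager", "alice"),
    (date(2008, 3, 3), "manager", "bob"),
    (date(2008, 3, 4), "manager", "alice"),
    (date(2008, 3, 5), "manager", "carol"),
    (date(2008, 3, 6), "manager", "alice"),
]

BEST_PRACTICE_WEEKLY_CONTACTS = 3  # assumed "best-practice" benchmark

# Count manager-employee contacts per employee.
contacts = Counter(employee for _, _, employee in interactions)

for employee in ("alice", "bob", "carol"):
    count = contacts[employee]
    flag = "OK" if count >= BEST_PRACTICE_WEEKLY_CONTACTS else "few learning opportunities"
    print(f"{employee}: {count} contacts this week ({flag})")
```

Even a crude tally like this could surface employees who rarely get coaching contact, which is all the "intelligence" a store manager would need to start changing behavior.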
Will Thalheimer . Blog . Jul 15, 2015 02:57pm
Starbucks is shutting all its stores on February 26th for three hours to train or retrain its employees. You can read related articles at the following links: Seattle Times, Starbucks Gossip (blog), Marketplace (audio; the story starts at 19:42), and Street Insider. The question will be: is the training well-designed? Wednesday morning, February 27th, will give only a partial answer. March, April, and May will be more important. And of course, maybe this has nothing to do with training at all. Maybe it's a store management problem. Maybe the brand is too diluted, no longer special. Maybe competition from Dunkin' Donuts and McDonald's is creating issues, especially as consumers try to save money in these uncertain economic times. Will the training work? Stay tuned.
Will Thalheimer . Blog . Jul 15, 2015 02:57pm
Elliott Masie came up with a great and very insightful wish list for LMS's. Click here to access it. He even added a few suggestions in the past few days, probably based on feedback from his loyal audience. I really like the richness that Elliott's suggestions might create for a typical LMS. Most LMS implementations are just a list of course offerings. On the other hand, I worry about overly complicating options for users. Most workers just don't have extra time to waste. Maybe the suggestion to let users rate the courses comes into play here. I also worry about user-generated content. It can be great, could be better than what the training folks can create, could engender more engagement, could be, bottom line, more effective. But we should all recognize that it is a double-edged sword. User-generated content could be incorrect, could be a huge waste of time, and could leave the organization vulnerable to legal liability.

Doesn't Fix the Biggest Problem with the LMS Mentality

The biggest problem with LMS's can't be fixed with Elliott's suggestions. The biggest problem is that the whole LMS face sends a powerful hidden message that "learning" is about taking courses or accessing other learning events. This "Learning Means Sitting" LMS mentality infiltrates whole organizations. I've seen this recently with one of my clients, a huge retailer, whose LMS has encouraged store managers and other store leaders to focus learning time on taking courses, in lieu of coaching, learning from each other, trying things out and getting feedback, encouraging store employees to take responsibility for particular areas, etc. It's not that they completely ignore these other learning opportunities; it's that the LMS focuses everyone's time and attention on courses, creating a lot of wasted effort. To get the most from an LMS, you ought to throw away your LMS and start over. People can learn something—develop competencies/skills—from courses or from other means. A competency-management system that offers multiple means to develop oneself is ideal, where courses/events are just one option. I still haven't seen a commercial system that does this, though... Most are course-first designs. Maybe I'm too over-the-top in recommending that we get rid of all LMS's. I make the statement to highlight the humongous problems that the LMS mentality is causing.
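To make the competency-first idea more concrete, here is a minimal sketch of the kind of data model I have in mind, where a course is just one development option among several. This is a hypothetical structure made up to illustrate the point, not any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class DevelopmentOption:
    kind: str          # e.g., "course", "coaching", "stretch assignment", "job aid"
    description: str

@dataclass
class Competency:
    name: str
    options: list[DevelopmentOption] = field(default_factory=list)

# A competency-first record: the course is one option among several,
# rather than the organizing unit of the whole system.
coaching_skill = Competency("coaching direct reports", [
    DevelopmentOption("course", "Coaching Fundamentals (e-learning)"),
    DevelopmentOption("coaching", "Shadow a head clerk for a week"),
    DevelopmentOption("stretch assignment", "Run the next team huddle and get feedback"),
])

for opt in coaching_skill.options:
    print(f"{coaching_skill.name}: [{opt.kind}] {opt.description}")
```

The design point is simply that the top-level key is the competency, not the course catalog, so on-the-job learning options get equal billing.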
Will Thalheimer . Blog . Jul 15, 2015 02:57pm
I recently completed one of the most comprehensive work-learning audits I've ever been asked to do, for a major U.S. retailer. The goal of the audit was to find out how their learning programs AND work-learning environment were supporting the stores in being successful. The audit involved (1) structured and unstructured interviewing at all levels of the organization, especially with store personnel, (2) focus groups held across the country with specific groups of store personnel (e.g., clerks, store managers, assistant managers, etc.), (3) task force meetings with senior line managers and representatives throughout the company, (4) learning audits of e-learning courses, (5) learning audits of a mission-critical classroom course, (6) review of company artifacts (CEO messages, publications, databases, intranet, etc.), (7) interviews with learning-and-performance professionals, (8) discussions of business strategy, (9) discussions regarding corporate information and data-gathering capabilities, (10) job shadowing, (11) store observations, etc.

Who/What Do Workers Learn From?

One of the most intriguing results came out of a relatively simple exercise I did with focus-group participants. The following is a rough approximation of those results. What I did was ask focus-group participants who they learned from. I would hold up a large 6 x 8 index card with a position label on it, for example, "District Manager," "Clerks," or "Corporate." The group would shout out where they thought that card should go on a large diagram I had created on the wall. I would place it on the wall in a particular category based on the verbal responses, and then we would negotiate as a group to determine its final positioning. So, for example, participants could say that they learned the following amounts from that person/position, and we often compromised using in-between placement: Learned Most, Learned a Lot, Learned Some, Learned a Little, Learned Least, Had Little/No Contact With.

See the diagram below for a rough example. This one is actually a composite based on several focus groups and more than one position. It gives a fair representation of how frontline retail clerks responded. Note that the orange boxes represent fellow employees, while the blue boxes represent other groups of people or things that they learned from. There are several key insights from these results:

People learn the most from those who they work closely with. People learn the most from their experience doing the job. People learn the most from their self-initiated efforts at learning. The more contact, the more learning (for the most part); however, there are benefits from learning from experts (e.g., store managers, head clerks), though the worker has to have at least some significant contact with them to create this benefit. You'll notice that district staff have only a little impact and regional and corporate staff have none. E-learning is seen as somewhat facilitative but not a place where workers learn the most. This result may be organization-specific, as different e-learning designs and implementations might easily move this result higher or lower. Frontline clerks didn't get much from company magazines and the like, but managers (not represented in the results above) did find value in these. Store managers also reported that networking with other store managers was one of the "Learn Most" entries for them. For this company, this network was even more important than learning from their district managers (their direct bosses).
This makes sense because their network is more accessible throughout the heat of the daily grind. These results were eye-opening for my client, and they are still wrangling with the implications. For example, district managers and district training staff seemed to produce very little learning benefit. So, should their roles in learning be de-emphasized or re-emphasized? These types of results have to be understood within the larger data-gathering effort, of course. Analyzed alone, they suffer from the problem of de-contextualized self-report data. Combined with multiple other data sources, they paint a really robust picture of an organization's learning environment.

Informal Learning, Social Networks, etc.

Vendors are out and about in our field now selling the benefits of complicated and expensive analysis tools for looking at how people learn through so-called informal on-the-job mechanisms. The example above shows that if you don't have the big bucks, there are simpler ways to get good data as well. One minimal sketch of how such focus-group placements might be aggregated follows.
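This is a hedged sketch, not my actual analysis: it maps the wall categories to a simple ordinal scale and averages the negotiated placements across focus groups to get a composite placement per source of learning. The scale values and the sample data are invented for illustration.

```python
from statistics import mean

# Placement categories from the wall exercise, mapped to an assumed
# ordinal scale (5 = Learned Most ... 0 = Had Little/No Contact With).
SCALE = {
    "Learned Most": 5, "Learned a Lot": 4, "Learned Some": 3,
    "Learned a Little": 2, "Learned Least": 1, "Had Little/No Contact With": 0,
}

# One negotiated placement per focus group, per source of learning.
# These sample placements are invented, not the audit's real data.
placements = {
    "Store Manager":    ["Learned Most", "Learned a Lot", "Learned Most"],
    "District Manager": ["Learned a Little", "Learned Least", "Learned a Little"],
    "E-learning":       ["Learned Some", "Learned Some", "Learned a Little"],
}

for source, cards in placements.items():
    score = mean(SCALE[c] for c in cards)
    print(f"{source}: composite score {score:.1f}")
```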
Will Thalheimer . Blog . Jul 15, 2015 02:57pm
I'm writing you from the eLearning Guild's annual conference. I went to a session presented by Silke Fleischer and colleagues at Adobe and was blown away by the work Adobe is doing to create products that support learning and related efforts. I then asked a number of industry thought leaders, who confirmed my interpretation: Adobe is now a 500-pound gorilla, likely to continue out-investing its competitors and thus creating better and better products for folks like us to use. If you're considering e-learning tools, you owe it to your organization to consider Adobe products. I have no financial relationship with Adobe, by the way. This is not to say that other products aren't worthy and/or do some things better than Adobe products. My thinking is this: companies who invest in their products are often more likely to be there for you in the years to come. I've seen many clients who started using a particular tool five to ten years ago, and they are basically stuck with it because of their large installed base of learning courses. Here are a few of the things that made me wake up and take notice:

- Adobe's update cycle on Captivate seems to be shrinking, as they are aggressively moving forward in the development of Captivate 4.
- Captivate is being used for many purposes, including the development of podcasts, advertising, etc.
- You can embed a working Captivate file into Adobe Connect and then have webinar or online-learning participants each interact with Captivate objects.
- PDF files can now include fully functional interactive images. So documents are not static anymore!!
- Adobe is working on a new platform called AIR, which will enable the compilation of many types of objects for display and interaction.
Will Thalheimer . Blog . Jul 15, 2015 02:57pm
Call me crazy, but I think it's important to invest in the research base for our field. I've spent a good chunk of the last year reviewing research from the world's preeminent refereed journals in regard to how to give learners feedback. I've created, I'd like to think, the seminal research review on how to give learners feedback, written in a way that puts feedback in perspective, that goes deep into the fundamentals to give readers clear mental models for how feedback works. It's this kind of in-depth exploration that allows you as a learning professional to use your wisdom to make the difficult design tradeoffs that you have to make. Recipes are for short-order cooks. Research-based wisdom for learning professionals is much more useful in the gritty day-to-day of our learning shops.

And now, instead of selling this document, I'm going to try an experiment and give it away. Call me crazy, but internet users (hey, that's us) just don't like to pay. I've been swimming upstream against the movement toward free information, knowing that the information I'm compiling is the best information out there, and that it takes an incredibly exhaustive effort to sift through refereed research, make sense of it, and repackage it in a way that resonates and is practical. But maybe research karma will work. It's worth a try, right? You can help me by reading the report, AND, if you think it's good, sending the link to everyone you know in your organization, in every learning-development organization you know, to your mom, your kids, your elected officials, to Elliott Masie and Tony Bingham (CEO of ASTD), to the New York Times. Special thanks go out to my friends at Questionmark, who agreed in advance of my finishing the report to license it for their clients and learning community. Questionmark is providing a great service by making it possible to disseminate world-class research-based information that is both valid and useful, including their support of the aforementioned research report on feedback.

Here are some of the insights from the two-part, 88-page research report:

- The most important thing to remember about feedback is that it is generally beneficial for learners.
- The second most important thing to remember about feedback is that it should be corrective. Typically, this means that feedback ought to specify what the correct answer is. When learners are still building understanding, however, this could also mean that learners might benefit from additional statements describing the "whys" and "wherefores."
- The third most important thing to remember about feedback is that it must be paid attention to in a manner that is conducive to learning.
- Feedback works by correcting errors, whether those errors are detected or hidden.
- Feedback works through two separate mechanisms: (a) supporting learners in correctly understanding concepts, and (b) supporting learners in retrieval.
- To help learners build understanding, feedback should diagnose learners' incorrect mental models and specifically correct those misconceptions, thereby enabling additional correct retrieval practice opportunities.
- To prepare learners for future long-term retrieval and fluency, learners need practice in retrieving. For this purpose, retrieval practice is generally more important than feedback.
- Elaborative feedback may be more beneficial as learners build understanding, whereas brief feedback may be more beneficial as learners practice retrieval.
- Immediate feedback prevents subsequent confusion and limits the likelihood of continued inappropriate retrieval practice.
- Delayed feedback creates a beneficial spacing effect.
- When in doubt about the timing of feedback, you can (a) give immediate feedback and then a subsequent delayed retrieval opportunity, (b) delay feedback slightly, and/or (c) just be sure to give some kind of feedback.
- Feedback should usually be provided before learners get another chance to retrieve incorrectly again.
- Provide feedback on correct responses when:
  a. Learners experience difficulty in responding to questions or decisions.
  b. Learners respond correctly with less-than-high confidence.
  c. All the information learned is of critical importance.
  d. Learners are relatively new to the subject material.
  e. The concepts are very complex.
- Provide feedback on incorrect responses:
  a. Almost always.
  b. Except:
     i. When feedback would disrupt the learning event.
     ii. When it would be better to wait to provide feedback.
  (These two decision rules are sketched in code at the end of this post.)
- When learners seek out and/or encounter relevant learning material either before or after feedback, this can modify the benefits of the feedback itself.
- When learners are working to support retrieval or fluency, short-circuiting their retrieval practice attempts by enabling them to access feedback in advance of retrieval can seriously hurt their learning results.
- When learners retrieve incorrectly and get subsequent well-designed feedback, they still have not retrieved successfully; so they need at least one additional opportunity to retrieve—preferably after a delay.
- On-the-job support from managers, mentors, coaches, learning administrators, or performance-support tools can be considered a potentially powerful form of feedback.
- Training follow-through software—which keeps track of learners' implementation goals—provides another opportunity for feedback.
- Feedback can affect future learning by focusing learners on certain aspects of learning material at the expense of other aspects. Learners may take the hint from the feedback to guide their attention in subsequent learning efforts.
- Extra acknowledgements (when learners are correct) and extra handholding (when learners are wrong) are generally not effective (depending on the learners). In fact, when feedback encourages learners to think about how well they appear to be doing, future learning can suffer as learners aim to look good instead of working to build rich mental models of the learning concepts.

Some of the concepts and language in the above recommendations may not be obvious until you actually read the research report. You can do that by clicking the link below.

The link to download the feedback report
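As promised above, here is a minimal sketch of those two when-to-give-feedback rules expressed as code. The parameter names are mine, not the report's; the logic simply restates conditions (a) through (e) and the two exceptions.

```python
def provide_feedback_on_correct(difficulty_high: bool,
                                confidence_high: bool,
                                critical_content: bool,
                                learner_is_novice: bool,
                                concepts_complex: bool) -> bool:
    """Return True if feedback on a *correct* response is warranted,
    following conditions (a) through (e) from the list above."""
    return (difficulty_high          # (a) learner struggled to respond
            or not confidence_high   # (b) correct, but with low confidence
            or critical_content      # (c) the content is of critical importance
            or learner_is_novice     # (d) learner is new to the material
            or concepts_complex)     # (e) the concepts are very complex

def provide_feedback_on_incorrect(would_disrupt: bool,
                                  better_to_wait: bool) -> bool:
    """Feedback on incorrect responses: almost always, with two exceptions."""
    return not (would_disrupt or better_to_wait)

# A confident correct answer on routine, non-critical content: skip feedback.
print(provide_feedback_on_correct(False, True, False, False, False))  # False
print(provide_feedback_on_incorrect(False, False))                    # True
```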
Will Thalheimer . Blog . Jul 15, 2015 02:57pm
I just spent an hour and forty minutes watching Thiagi on a YouTube video. That's a lot of time, but it was worth it. Thiagi (whose last name is too long to spell, and, like Prince or Madonna, it's not really necessary) is not only one of our field's foremost personalities and presenters, but he is also a brilliantly creative instructional designer. I may not agree with each and every one of his assertions, but every time I hear him speak, I am compelled to listen for wisdom that I myself don't yet own. I am challenged to new ways of thinking. I am humbled by his encyclopedic knowledge of learning-design methods and his ability to pair learning needs with that encyclopedia. If I were a learning executive with a tough learning-design problem, he would be one of the people I'd call to get useful original ideas on the issue. As you watch the video, you might want to skip forward to 0:05:40 to get through the introductory comments about the program at University of Maryland, Baltimore County.
Will Thalheimer . Blog . Jul 15, 2015 02:57pm
I've decided to offer short free webinars from time to time, each on a focused subject. I did my first one last week, and it went pretty well, except for a bit of operator error on my part. In these 30-minute sessions, I'm going to focus on some of the most important issues in learning design. Each session includes:

- Learning News (a brief humorous take on the week's news)
- Stone Pebble Nugget (an exploration of a key issue in learning design)
- Ask Dr. Thalheimer (ask me anything -- think free consulting)

Upcoming Online Sessions (click below to sign up):
- Friday, June 6th: Feedback on Correct Answers
- Friday, June 20th: Minimizing Forgetting

Previous Online Sessions:
- Friday, May 30, 2008: The Learning Landscape
Will Thalheimer . Blog . Jul 15, 2015 02:56pm
This is the first draft of a chapter from my forthcoming book (don't ask when; it's a labor of love), modified somewhat to avoid references to other parts of the book (to make it whole here). Your comments, criticisms, and good ideas will be gratefully explored.

The Genesis of the Model

It's helpful to have an overall understanding of what we're trying to do in the learning-and-performance profession. I offer the following model as a way to frame our discussion, as a way to provide a deep conceptual map of our world—at least the world to which we should be aspiring. I've been in the learning-and-performance field for almost a quarter century. I've been an instructional designer, a trainer, a university adjunct, a simulation architect, a project manager, a business leader, a researcher, and a consultant. And yet, even recently, I have found myself wanting to build a better model of what we do. In the book, I'm going to offer several models that provide a good starting place for deep understanding. The first of these models I call The Learning Landscape. I've been gradually building this model for years, and I recently added some additional complexity that completes the picture of what we do—or what we should aim to do. Of course, all models are simplifications in the interest of understanding and usability. Early versions of The Learning Landscape have resonated with my clients, and I think this latest version provides additional value. I'm going to unveil the model a piece at a time, adding complexity as this blog post progresses.

The Model's Phases

Look at the bottom of the following diagram. You'll notice three labels there. The learning landscape I'm describing is one in which we build a learning intervention to help our learners perform in their future performance situations (for example, on the job) in an attempt to create certain beneficial learning outcomes. So, for example, if we build a course to teach creative-thinking skills, we do it to help learners be more creative in their jobs and produce more innovations for their organizations.

From Learning Intervention to Learning Outcomes

Look at the diagram below. It shows how a learning intervention creates performance that leads to results. In the learning intervention (box A), the learners learn—they build an understanding of the learning content. Later, in the performance situation (box C), the learner retrieves from memory the information that they learned. They also apply what they learned (box E). This successful retrieval and application enable the learner to get from the learning what they hoped to get (box F) and the organization to get the learning results it wanted (box G) from the investment it made. The diagram above shows the minimum requirements for a successful learning intervention. The learners have to learn (box A), retrieve (box C), and apply (box E) what they've learned in order to create beneficial learning outcomes. It would be pollyannaish for us to believe that this process always works as diagrammed above. When our learners fail to learn (box A), the whole process breaks down. You can't retrieve what you never learned, and you can't apply what you can't retrieve. (A tiny computational restatement of this chain appears at the end of this post.) Even when our learners fully learn a topic, at a later time they may fail to retrieve what they learned (box C). People forget information. They may forget something permanently, or they may suffer temporary or contextually induced forgetting.
Learners can also learn and retrieve successfully, but not apply what they've learned (box E). There are many reasons that learners fail to apply what they are able to retrieve. The learning intervention might not have been sufficiently motivating to prompt the learners to apply what they learned. The incentives in the performance situation may discourage application. The learners may not have the time, resources, or other competencies that would enable them to be successful. The diagram above is very helpful in understanding how our learning interventions create learning results. It highlights the obvious importance of the quality of the learning intervention itself. It also highlights the criticality of retrieval. Later we will explore in depth how we can build learning interventions to specifically support retrieval. Finally, the diagram above highlights the importance of the performance context in supporting learners in applying what they learned. Later we will talk about how we can gain more influence in the performance situation. We'll also discuss how to enroll learners' managers to improve the likelihood of successful application.

Adding Other Working-Memory Processing (besides retrieval of what was learned)

The above diagram is missing a few key elements. While it nicely highlights the importance of creating retrieval, it doesn't account for other working-memory triggers. In the diagram below, I've added another box (box D, "Learner Responds") to represent working-memory processing not directly related to what was previously learned in our original learning intervention. While working-memory "retrieval" and "responding" are overlapping and often interdependent processes, I wanted to distinguish between the retrieval of information from the learning intervention and responding generated by other means, for example, job aids, performance support, management prompting, and other guidance mechanisms. While often box C and box D work in concert (as when a job aid provides guidance and also supports retrieval of what was previously learned), we need to remember that we can get our learners thinking productive thoughts without necessarily relying on course-like learning interventions. To reiterate, there are two ways to trigger working-memory processing related to our learning efforts. First, retrieval cues can trigger our learners to remember what they've learned in the original learning intervention. Second, other triggers (e.g., performance-support tools) can also stimulate working-memory responding. As you probably know, sometimes it is more effective to rely on retrieval, while other times it is more effective to rely on performance-support interventions. Sometimes it is better to utilize both in concert.

The Full Model

We also should recognize that learners can learn in their performance situations. The full model below adds this performance-situation learning, what some people call informal learning. Performance-situation learning (box B) belongs in the model because it is a powerful force in real-world performance. Our learners do a large part of their learning on the job. Whether they receive formal training or not, learners learn through their experiences at work, at play, just living their lives. Performance-situation learning (box B) can support the learning-intervention learning (box A) by reinforcing what was originally learned, taking the learning deeper, determining real-world contingencies, and creating fluency in retrieval, among other things.
One thing we need to realize as learning professionals is that we don't necessarily have to get learners up to speed completely in our formal learning interventions—in fact, it is difficult to do so. Instead of trying to cram every bit of information into our training programs, we would be far better off designing our programs with an eye to what can be learned on the job and a plan for how to support that later on-the-job learning. From the opposite direction, learning-intervention learning (box A) can be designed specifically to help learners learn in their performance situations. We'll talk more about this later, but briefly, we can help our learners learn by helping them notice cues they might not readily notice, by providing them with relevant mental models of how the world works, and by influencing the performance situation itself—for example, by providing reminding mechanisms and getting learners' managers involved. To highlight this point again, box A learning can influence and support box B learning, and box B learning can reinforce and extend box A learning.

The Learning Landscape Summary

All models and all metaphors have limitations, even if they are brilliantly clear in simplifying complex realities into workable conceptual maps (think E = mc²). I've been working on this learning landscape model for years, and though it seems complete and potent to me now, I imagine that in the years to come I and others will find chinks in its armor or improvements that can be made. I admit the possibility of the model's limitations partly because it is true (it is likely to have limitations) and partly to model useful thought processes in learning design. Too often, our instructional-design programs have taught models as gospel, unfortunately creating instructional designers who are only able to follow recipes—and who particularly (a) are unable to be creative when unusual situations confront them, (b) are unable to create new models of increasing or context-specific usefulness, and (c) are unable/unwilling to listen and learn from the wisdom of other people and other disciplines.

The learning landscape model is intended to make the following points:

- Our ultimate goal should be to create beneficial learning outcomes. There are two aspects of learning outcomes—the fulfillment the learner gets from undertaking a learning effort and the learning results the organization gets from investing in learning.
- For formal learning interventions to produce their benefits, they must ultimately produce appropriate behaviors in some future performance situation (often a workplace on-the-job situation).
- For formal learning interventions to produce their benefits, they must support the learners in being able to retrieve what they've learned. This means, specifically, that our learning interventions have to be designed to minimize forgetting and elicit spontaneous remembering.
- There are two ways to generate appropriate behaviors: retrieval of previously learned information and triggering of appropriate responding. These working-memory processes can work in concert.
- Triggering appropriate working-memory responding is an underutilized tool. We need to look more aggressively to utilize performance-support tools, reminding mechanisms, and management oversight.
- Formal learning interventions are not the only means to produce appropriate behaviors. Learners do a lot of their learning in their performance situations. We ought to leverage that learning to reinforce and extend any formal learning that was utilized.
- We ought to design our formal learning interventions to improve our learners' informal-learning opportunities.
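As a postscript, here is a tiny computational restatement of the minimum-requirements chain (boxes A, C, and E): failure at any stage collapses the whole outcome. The probabilities are invented placeholders, not figures from the chapter.

```python
# Boxes A, C, E as a simple multiplicative chain: you can't retrieve what
# you never learned, and you can't apply what you can't retrieve.
# The stage probabilities below are illustrative placeholders only.
def learning_outcome(p_learn: float, p_retrieve: float, p_apply: float) -> float:
    """Probability that an intervention yields on-the-job application."""
    return p_learn * p_retrieve * p_apply

# Even decent odds at each stage multiply down quickly.
print(learning_outcome(0.9, 0.6, 0.5))  # 0.27: most value leaks after learning
```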
Will Thalheimer . Blog . Jul 15, 2015 02:56pm
In a provocative article today in the New York Times (June 6, 2008), Paul Krugman makes the case that all media providers/creators (books, music, articles, software, etc.) will be forced to lower prices significantly or give away their products for free. The new business model will involve selling ancillary services or products. This will not only produce a profound shift in how the world works, but it will affect the learning-and-performance industry as well. For example, no longer will I be able to earn practically nothing for producing some of the best research-to-practice stuff on the planet. Instead, I'll be able to earn nothing at all for the research reports I produce. No worries, I'm all right with that. I've got my consulting and speaking to support me. But really, all you training providers ought to peer deeply into the future. How will this affect you? Here are some wild ideas:

- Off-the-shelf e-learning courses will get really good and be sold really cheaply to wide audiences, and all the small e-learning shops across the world will collapse into three to five big powerhouses, at least for generic topics like customer-service basics, leadership basics, Microsoft Excel, etc. (maybe just within countries at first, because of the need to be culturally appropriate).
- Companies that sell vast collections of mediocre e-learning are doomed.
- Custom training vendors will have to focus on learning that is strategic to their clients' success and/or tailored. We'll all have to get better at talking to the business people.
- Training will have low profit margins, but training with consulting will have sustainable profit margins.
- LMS's (and the coming talent-development upgrades) will be given away cheap, but installation, training, and maintenance will be sold for reasonable margins. Moodle will win. Well, Moodle could get other open-source competition.
- Training departments (or talent-development departments) will be able to purchase some things more cheaply.
- All the e-learning development tools that proliferate today will disappear or be reduced to niche segmentation as two to five dominant players provide relatively inexpensive yet powerful authoring tools. Think Captivate at $259 in five years, $79 in ten years.
- Training vendors may actually be forced to prove their value as the market shrinks and competition heats up. Can you say control-group studies done by independent, unbiased evaluators? (I'm probably biased in wishing for this.)
- Training vendors may actually be forced to pay attention to the learning research to enable them to actually get better results so they will look good on valid evaluations. (I'm probably biased in wishing for this.)

Okay, what do you think? As for my research/consulting practice, don't worry, I'm already there. I basically make 80% of my money doing consulting, speaking and keynotes, workshops, learning audits, and providing instructional-design help. My research is done for love, because it's the right thing to do, and because it makes me—in my not-so-humble opinion—extraordinarily good at providing consulting, workshops, etc.
Will Thalheimer . Blog . Jul 15, 2015 02:56pm
I've developed three separate job aids regarding learning measurement over the last few years, and I decided recently that one was enough, so I've integrated the wisdom from all three into a single job aid, which I now make available to you for free. This job aid has several advantages:

- It's inspired by honest-to-goodness learning research.
- It fits onto one page.
- It provides a brief rationale for each point.
- It prompts users to audit their current practices.
- It prompts users to take action for improvement.
- It includes contact information for further inquiries.
- It covers critical measurement-design issues.
- It's free.

Click here to download the job aid now.
Will Thalheimer . Blog . Jul 15, 2015 02:56pm
This Friday online, I'm leading a discussion on measurement best practices during my Brown Bag Learning event. It's a 30-minute review of a new job aid I've created to help you do better measurement. As usual, it's research-based!! Click to get the job aid. Click to register (or learn more) for the webinosh.
Will Thalheimer . Blog . Jul 15, 2015 02:56pm
Smile sheets (the feedback forms we give learners after learning events) are an almost inevitable practice for training programs throughout the workplace learning industry. Residing at Donald Kirkpatrick's 1st level—the Reaction level—smile sheets offer some benefits and some difficulties. On the plus side, smile sheets (a) show the learners that we respect their thoughts and concerns, (b) provide us with customer satisfaction ratings (with the learners as customers), (c) hint at potential bright spots and trouble spots, and (d) enable us to make changes to improve later programs. On the minus side, smile sheets (a) do not seem to correlate with learning or behavior (see the meta-analysis by Alliger, Tannenbaum, Bennett, Traver, and Shotland, 1997, which showed very weak correlations), (b) are often biased by being provided to learners in the learning context immediately after learning, and (c) are often analyzed in a manner that over-values the data as more meaningful than it really is. Based on these benefits and difficulties, I recently developed a new smile sheet (one that shows its teeth, so to speak. SMILE) for my workshop Measuring and Creating Learning Transfer. It has several advantages over traditional smile sheets:

1. Instead of asking learners to respond globally (which they are not very good at), it asks learners to respond to specific learning points covered in the learning intervention. This not only enables the learners to better calibrate their responses, it also gives the learners a spaced repetition (improving later memory retrieval on key learning points).

2. The new smile sheet enables me to capture data about the value of the individual key concepts so that changes can be made in future learning interventions. (A simple analysis sketch appears near the end of this post.)

3. The smile sheet has only a few overall ratings (when lots of separate ratings are used in traditional smile sheets, most of the time we don't even analyze or use the data that is collected). There is space for comments on specifics, which obviates the need for specific ratings and really gets better data as well. The average value is highlighted, which helps the learners compare the current learning intervention to previous learning interventions they have experienced. (You should be able to click on the image to see a bigger version.)

4. The smile sheet asks two critical questions related to how likely it is that the information learned will be utilized on the job and how likely it is that the information will be shared with others. In some sense, this is where the rubber hits the road, because it asks whether the training is likely to have an impact where it was intended to have an impact.

5. The smile sheet shows some personal touches that reassure the learners that the learning facilitator (trainer, professor, etc., or me in this case) will take the information seriously.

6. Finally, the smile sheet is just a starting point for getting feedback from learners. Learners are also sent a follow-up survey two weeks later, asking them to respond to a few short questions. Here are a few of those questions. Again, you might need to click the image to see a bigger version. The learners get the following question only if they answered a previous question suggesting that they had not yet shared what they learned with others.

Why I Like My New Smile Sheet

I'm not going to pretend that I've created the perfect assessment for my one-day workshop. As I've said many times before, I don't believe in "perfect assessments." There are simply too many tradeoffs between precision and workability.
Also, my new smile sheet and the follow-up survey are really only an improvement on the traditional smile sheet. So much more can be done, as I will detail below. I like my new evaluation sheet and follow-up survey because they give me actionable information. If my learners tell me that a concept provides little value, I can look for ways to make it valuable and relevant to them, or I can discard it. If my learners find a concept particularly new and valuable, I can reinforce that concept and encourage implementation, or I can highlight this concept in other work that I do (providing value to others). If my learners rate my workshop high at the end of the day, but low after two weeks, I can figure out why and attempt to overcome the obstacles. If my learners think they are likely to implement what they learned (or teach others) at the end of the day, but don't follow through after two weeks, I can provide more reminders, encourage more management support, provide more practice to boost long-term retrieval, or provide a follow-up learning experience (maybe a working-learning experience).

I also like the evaluation practice because it supports learning and performance. It provides a spaced repetition of the key learning concepts at the end of the learning event. It provides a further spaced repetition of the key learning concepts at the beginning of the two-week survey. It reminds learners at the end of the learning intervention that they are expected to put their learning into practice and share what they've learned with others. It reminds them after two weeks back on the job that they are expected to put their learning into practice and share what they've learned with others. It provides them with follow-up support two weeks out if they feel they need it.

Limitations

My new smile sheet and follow-up survey don't tell me much about how people are actually using what I've taught them in their work. They could be implementing things perfectly or completely screwing things up. They might perfectly understand the learning points I was making, or they may utterly misunderstand them. The workshop is an open-enrollment workshop, so I don't really have access to people on the job. When I run the workshop at a client's site (as opposed to an open-enrollment format), there can be opportunities to actually put things into practice, give feedback, and provide additional information and support. This, by the way, not only improves my learners' remembering and performance (and my clients' benefits), it gives me even richer evaluation information than any smile sheet or survey could. While the smile sheet and follow-up survey include the key learning points, they don't assess retrieval of those learning points or even understanding. Not everyone will complete the follow-up survey. The design I mentioned not only doesn't track learning, understanding, or retrieval; it also doesn't compare results to anything except learners' subjective expectations. If I were going to measure learning or performance or even organizational results, I would consider control groups, pretests, etc. There is no benchmarking data from other similar learning programs. I don't know whether my learners are doing better than if they read a book, took a workshop with Ruth Clark, or went and got their master's degree in learning design from Boise State. Bottom line: my smile sheet and follow-up survey are an improvement over most traditional smile sheets, but they certainly aren't a complete solution.
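As mentioned in point 2 above, the per-concept ratings are what make the smile sheet actionable. Here is a minimal sketch of that analysis; the concept names, ratings, and rework threshold are all invented for illustration, not data from my actual workshop.

```python
from statistics import mean

# Hypothetical smile-sheet responses: per-concept value ratings on a 1-5
# scale, one list of learner ratings per concept. All data is invented.
responses = {
    "spaced repetitions":   [5, 4, 5, 4],
    "retrieval practice":   [4, 5, 4, 5],
    "learning-styles myth": [2, 3, 2, 2],  # candidate for rework or removal
}

REWORK_THRESHOLD = 3.0  # assumed cutoff for flagging a concept

for concept, ratings in responses.items():
    avg = mean(ratings)
    note = "keep" if avg >= REWORK_THRESHOLD else "rework or discard"
    print(f"{concept}: avg {avg:.2f} -> {note}")
```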
Learning Measurement is a Critical Leverage Point

Learning measurement provides us with a critical leverage point in the work that we do. If we don't do good measurement, we're not getting good feedback. If we don't get good feedback, we're not able to improve what we're doing. My workshop smile sheet and follow-up survey attempt to balance workability and information-gathering. If you find value in this approach, great; feel free to use the links below to download my smile sheet so you can use it as a template for your evaluation needs. If you have suggestions for improvement, send me an email or leave a comment.

Download the Smile Sheet
- As a PDF (so you can see how it should look): Download SmileSheetJune2008.pdf
- As a Word 2007 document: Download SmileSheetJune2008.docx
- As a Word 2003 document: Download smilesheetjune2008word2003.doc

References

Alliger, G. M., Tannenbaum, S. I., Bennett, W. Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-358.
Will Thalheimer . Blog . Jul 15, 2015 02:56pm
I've been skeptical of claims of so-called brain-based learning for years. Someday perhaps we'll be able to link analysis of brain function to behavior and learning, but we're not there yet. Here is a brilliant YouTube video by Daniel Willingham, professor at the University of Virginia, that does a great job explaining the problems with brain-based learning. His audience seems to be schools and universities, but his arguments hold for work-learning as well.
Will Thalheimer . Blog . Jul 15, 2015 02:55pm
The "Awards" that proliferate in our industry are largely a bunch of hooey. Bill Ellet, Editor of Training Media Review, in a more measured tone than mine, has some prescient thoughts on this.
Will Thalheimer . Blog . Jul 15, 2015 02:55pm
Have any of you been asked to be political as part of your role as a learning professional? Has your training and development apparatus been charged with manipulating employee voting behavior or political action? This story from Slate, about Walmart's attempts to marshal votes for Republicans, got me thinking about this. Let us know if you know anything about training and development departments being utilized to influence elections. What would you do if asked to develop a training initiative to modify your learners' political thinking and action? Would you do it if the training would support your preferred candidate? Would it be ethical if the training was simply designed to encourage voter turnout (realizing of course that voter turnout/registration is usually targeted to push the election one way or another)? Would it be ethical to use the training and development department if the training was truly non-partisan, not favoring either candidate or party?
Will Thalheimer . Blog . Jul 15, 2015 02:55pm
Here's a nice article, sponsored by Adobe and written by Allison Rossett and Antonia Chan (2008, June), that provides some very useful descriptions and examples of engaging eLearning design. Check it out.
Will Thalheimer . Blog . Jul 15, 2015 02:54pm
For almost a decade I've been building a model of how learning works to prompt performance. Each iteration gets better (in my unbiased opinion). Here's the latest one--this one has the advantage of pointing out the responsibilities learning professionals have AND the responsibilities that learners' managers and the workplace have in creating on-the-job results. You can use this model for two purposes:

- As a visual metaphor for how learning works to drive on-the-job performance and results.
- As a job aid to assign responsibilities and tasks.

This graphic draws on many sources, many of which I'm probably unaware of. It draws from the wisdom of authors such as Wick, Pollock, Jefferson, and Flanagan of Six Disciplines of Breakthrough Learning fame, and Tim Mooney and Rob Brinkerhoff, authors of the new book Courageous Training (which is great, by the way; I'll review it within the next month). It also draws from countless researchers on learning, memory, instruction, and cognition who have helped me understand learning at a deep level, enabling me to add to models that don't fully include wisdom on how learning and cognition really work to drive remembering. Also, I'd like to thank my many clients, who have provided me with a great real-world workshop in which to think deeply about how learning works in practical reality. I'd particularly like to thank my friends at Walgreens, and especially Anne Laures, who commented on an earlier version of this model. Download Learning-Performance_Diagram_v2.pdf. As always, this is a work in progress, so let me know what you like and what I might be missing. Note, of course, that human learning and performance are too complicated for any diagram to include every factor of relevance. My goal is to create a model simple enough to be easily understood and precise enough to be useful and to provide practical learning-to-performance improvement. Oh, if you have to give it a name, you might call it the Learning-to-Performance Landscape Model, but I'll probably come up with a better name.
Will Thalheimer . Blog . Jul 15, 2015 02:54pm
I've been busy again thinking about the nexus between LEARNING and LEARNING MEASUREMENT. You can peruse some of my previous thoughts on learning measurement by clicking here. Here is a brand new article that I wrote for the eLearning Guild on how to evaluate Learning 2.0 stuff. Note: Learning 2.0 is defined (by the eLearning Guild) as: The idea of learning through digital connections and peer collaboration, enhanced by technologies driving Web 2.0. Users/Learners are empowered to search, create, and collaborate, in order to fulfill intrinsic needs to learn new information. Evaluating Learning 2.0 differs from evaluating traditional Learning 1.0 training for many reasons, one of which is that Learning 2.0 enables (encourages) learners to create their own content. Steve Wexler, Director of Research and Emerging Technologies at the eLearning Guild, and I are leading a webinar on Thursday, September 4th, on the current state of eLearning measurement. We've got some new data that we're hot to share. Finally, Roy Pollock, one of the authors of the classic book Six Disciplines of Breakthrough Learning, and I are leading a one-day symposium on measuring learning at the eLearning Guild's DevLearn 2008 conference in November. It's a great chance to go to one of the best eLearning conferences around while working with Roy and me in a fairly intimate workshop, wrangling with the newest thinking in how to measure learning. Choose Symposium S-4. Note that it may not show Roy's information there yet--the Guild is still working on the webpage--but let me assure you that Roy and I are equal partners in this one.
Will Thalheimer . Blog . Jul 15, 2015 02:54pm
Judith Gustafson just left an excellent comment on an earlier blog post. She let us know about a presentation at the Association for Educational Communications and Technology (AECT) conference in 2002. Click here for the PPT presentation by Tony Betrus and Al Januszewski of the State University of New York at Potsdam, which does a great job of describing what Edgar Dale meant to convey with his cone AND shows numerous examples of how the cone has been used improperly with numbers added. Here is my original post on this.
Will Thalheimer . Blog . Jul 15, 2015 02:54pm
I started Work-Learning Research 10 years ago in August. Since that time, I've been translating learning research into practical recommendations for learning professionals. Unfortunately, putting those recommendations into practice is not easy. It takes a major change initiative in most instructional-development shops. It takes time. It takes leadership. It often takes a helping hand. Why? Because we have to completely change our mindsets. For example, I've recently been using a model I'm calling Situation-Based Learning Design. It is research-based, but because it is translated and crafted into a conceptually useful framework, my clients and audiences have found it eye-opening. More importantly, they have been able to see its applicability. BUT, even though the ideas in the model are easy, it is extremely difficult to move a whole work team to the new method. It takes time, perseverance, and guidance. We all fall back on our topic-based learning-design mental models. Developing new mental models aligned with the research is a worthwhile slog, but it is a slog nonetheless. Recently I've been designing workshops around the Situation-Based Learning Design notion. My clients see me present the concept at a conference and want a workshop in their own company. Nothing I have done in my ten years as President of Work-Learning Research has been so satisfying. I've learned a few things over the years, correcting mistakes in my delivery. SMILE. One reason that Situation-Based Learning Design is having such an impact now is that it's a simple research-based model that immediately makes sense to people. The other reason for its impact is that we're able to build workshops that enable people to begin changing the way they do learning design. Finally, more and more learning professionals understand that for training (even their own training) to be effective, it has to be designed more like a change initiative than a course.
Will Thalheimer . Blog . Jul 15, 2015 02:54pm
Politics yuck, politics tricks, politics risk, politics fix, politics rules, politics is. Truth is, politics is the hand-to-hand application of anecdotal and scientific wisdom on human learning and cognition. Here in the United States, we are in the middle of an exciting and critical Presidential election campaign. I love observing politics because I find it intriguing from a learning-and-cognition standpoint. Here are some things we (as learning professionals) can learn from the political wizards:

- Repetition is worth repeating. Space your repetitions over time. Have powerful messengers repeat the key messages.
- Authentic messengers are listened to longer and with more engagement. Messengers who lose credibility (or integrity) are doomed.
- Prioritize your messages. Brand your messages into a potent theme. Vary the delivery of your messages, but stay consistent in the underlying message and theme.
- Learning messages that are aligned with on-the-ground realities are the most powerful. It is only the very rarest of incumbents who can overcome a bad economy. It is only the rarest of learning messages that can overcome irrelevance or everyday business distractions.
- When your efforts or credibility are attacked, fight back hard and fast. When candidates are attacked, they attack back, disputing the assertions. If your training efforts are impugned or criticized anywhere in your company, go on the offensive. Dispute the claims immediately and publicly. Let people know public criticism of your efforts will be met with vigorous rebuttals. Pull the criticizer aside privately and ask them not to continue their claims. Explain your realities. Educate them about the learning enterprise. Send out communications to key stakeholders disputing the claims, if not directly then indirectly by highlighting successes.
- After stopping the bleeding, listen to the complaints to see if there is truth in them. Fix the problems as soon as you can. Go back to the complainers and tell them how you fixed the problems. Ask the complainers for their support and ideas going forward. Remind them of your need for resources, support, etc. Help them solve their business problems.
- If you do get public complaints, see those as a warning sign that you are screwing up big time. Reach out and get better feedback on how you're doing and how you're doing politically. Build better feedback into your learning measurements and designs.

Remember, if you're a leader of the learning enterprise in your company, you have a responsibility to ensure that the learning-and-performance efforts will work. If your training efforts have a bad reputation, the learning will never get the support it needs to move from learning to application, and you'll never get the resources you need to get real results. Bottom line: embrace politics; it's only human.
Will Thalheimer . Blog . Jul 15, 2015 02:54pm