Sorry for the lack of posts this week; Monday was shot while I migrated my old machine to a new one (yay!), Tuesday was shot with catching up, and Wednesday was shot with lost internet and trying to migrate the lad to my old machine. So today I realize I haven’t posted all week (though you got extra from me last week ;). So here’s one reflection on last week’s conference.

First, if you haven’t seen it, you should check out the debate I had with the good Dr. Will Thalheimer over at his blog about the Kirkpatrick model. He’s upset that it’s not permeated by learning, and I argue that its role is impact, not learning design (see my diagram at the end). Great comments, too! We’ll be doing a hangout on it on Friday the 3rd of April.

The other interesting thing that happened is that on the first day I was cornered three times for deep conversations on measurement. This is a good thing, mostly, but one in particular was worth a review. The discussion for this last one centered on whether measurement was needed for most initiatives, and I argued yes, but with a caveat.

There was an implicit thought that for many things measurement wasn’t needed. In particular, for informal learning, when we’ve got folks successfully developed as effective self-learners and a good culture, we don’t need to measure. And I agree, though we might want to track (via something like the xAPI) to see what things are effective or not.

However, I did still think that any formal interventions, whether courses, performance support, or even specific social initiatives, should be measured. First, how are you going to tune it to get it right? Second, don’t you want to attach the outcome to the intervention? I mean, if you’re doing performance consulting, there should be a gap you’re trying to address, or why are you bothering? If there is a gap, you have a natural metric.

I am pleased to see the interest in measurement, and I hope we can start getting some conceptual clarity, some good case studies, and really help make our learning initiatives into strategic contributions to the organization. Right?
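(A side note for the curious: tracking via the xAPI amounts to sending small actor/verb/object statements to a Learning Record Store. Here’s a minimal sketch; the LRS endpoint, credentials, and resource IDs are placeholders, not any real system.)

```python
# Minimal illustrative xAPI statement: "this learner experienced this resource".
# Endpoint, credentials, and IDs are placeholders, not a real LRS.
import requests

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A. Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://example.com/resources/troubleshooting-guide",
        "definition": {"name": {"en-US": "Troubleshooting guide"}},
    },
}

response = requests.post(
    "https://lrs.example.com/xapi/statements",  # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),          # placeholder credentials
)
response.raise_for_status()
```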
Clark   .   Blog   .   Nov 22, 2015 05:09am
In the Debunker Club, a couple of folks went off on the 70:20:10 model, and it prompted some thoughts. I thought I’d share them.

If you’re not familiar with 70:20:10, it’s a framework for thinking about workplace learning that suggests we need to recognize that the opportunity is about much more than courses. If you ask people how they learned the things they know to do in the workplace, the responses suggest that somewhere around 10% came from formal learning, 20% from informal coaching and such, and about 70% from trial and error. Note the emphasis on the fact that these numbers aren’t exact; they’re just an indication (though considerable evidence, from a variety of sources, suggests that the contribution of formal learning is somewhere between 5 and 20%).

Now, some people complain that the numbers can’t be right, since no real measurement comes out in such neat round numbers. To be fair, they’ve been fighting against the perversion of Dale’s Cone, where someone added bogus numbers that have permeated learning for decades and can’t seem to be exterminated. It’s like zombies! So I suspect they’re overly sensitive to whole numbers.

And I like the model! I’ve used it to frame some of my work, using it as a framework to think about what else we can do to support performance: coaching and mentoring, facilitating social interaction, providing challenge goals, supporting reflection, etc. And again to justify accelerated organizational outcomes.

The retort I hear is that "it’s not about the numbers", and I agree. It’s just a tool to help shake people out of the thought that a course is the only solution to all needs. And, outside the learning community, people get it. I have heard that, over presentations to hundreds of audiences of executives and managers, they all recognize that the contributions to their success came largely from sources other than courses.

However, if it’s not about the numbers, maybe calling it the 70:20:10 model may be a problem. I really like Jane Hart’s diagram about Modern Workplace Learning as another way to look at it, though I really want to go beyond learning too. Performance support may achieve outcomes in ways that don’t require or deliver any learning, and that’s okay; there are times when it’s better to have knowledge in the world than in the head.

So, I like the 70:20:10 framework, but recognize that the label may be a barrier. I’m just looking for any tools I can use to help people start thinking ‘outside the course’. I welcome suggestions!
Clark   .   Blog   .   Nov 22, 2015 05:09am
Week before last, Will Thalheimer and I had another one of our ‘debates’, this time on the Kirkpatrick model (read the comments, too!). We followed up last week with a live debate. And in the course of it I said something that I want to reiterate and extend.

The reason I like the Kirkpatrick model is that it emphasizes one thing I see the industry failing to do. Properly applied (see below), it starts with the measurable change you need to see in the organization, and you work backwards from there. You go back to the behavior change you need in the workplace to address that measure, and from there to the changes in training and/or resources that will create that behavior change. The important point is starting with a business metric. No ‘we need a course on this’, but instead: "what business goal are we trying to impact?"

Note: the solution can just be a tool; it doesn’t always have to be learning. For example, if what people need to access accurately are the specific product features of one of a multitude of solutions in rapid flux (financial packages, electronic hardware, …), trying to get that into the head accurately is an exercise in futility, and you’re better off putting the information ‘in the world’. (Which is why I want to change from Learning & Development to Performance & Development: it’s not about learning, it’s about doing!)

The problems with Kirkpatrick are several. For one, even he admitted he numbered it wrong. The starting point is numbered ‘four’, which misleads people. So we get the phenomenon where people do stage 1, sometimes stage 2, rarely stage 3, and stage 4 is almost non-existent, according to ATD research. And stage 1, as Will rightly points out, is essentially worthless, because the correlation between what learners think of the learning and the actual impact is essentially zero! Finally, too often Kirkpatrick is wrongly considered as only evaluating training (even the language on the site, as the link above will show you, talks only about training). It should be about the impact of an intervention whatever the means (see above). And impact is what the Kirkpatrick model properly is about, as I opined in the blog debate.

So, in the live debate, I said I’d be happy with any other model that focused on working backwards. And I was reminded that, well, I proposed just that a while ago! The blog post is the short version, but I also wrote a rather longer and more rigorous paper (PDF), and I’m inclined to think it’s one of my more important contributions to design (to date ;). It’s a fairly thorough look at the design process and where we go wrong (owing to our cognitive architecture), and a proposal for an alternative approach based upon sound principles. I welcome your thoughts!
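To make ‘working backwards’ concrete, here’s a minimal sketch of the chain you’d want written down before any design starts. Every name and value in it is an invented example, not Kirkpatrick’s terminology or anyone’s real metrics:

```python
# A sketch of working backwards from a business measure to an intervention.
# All values are hypothetical examples, not prescriptions.
from dataclasses import dataclass

@dataclass
class ImpactChain:
    business_metric: str      # the measurable organizational change you start from
    workplace_behavior: str   # what people must do differently to move that metric
    intervention: str         # the course, tool, or resource that creates the behavior
    measure_of_success: str   # how you'll know the intervention worked

chain = ImpactChain(
    business_metric="Reduce support-call escalations by 20%",
    workplace_behavior="Agents resolve tier-1 issues without escalating",
    intervention="Decision-tree job aid plus scenario practice",
    measure_of_success="Escalation rate per agent, 90 days post-rollout",
)
```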
Clark   .   Blog   .   Nov 22, 2015 05:08am
Last week on the #chat2lrn twitter chat, the topic was microlearning. It was apparently prompted by this post by Tom Spiglanin, which does a pretty good job of defining it, but some conceptual confusion showed up in the chat that makes it clear there’s some work to be done. I reckon there may be a role for the label and even the concept, but I wanted to take a stab at what it is and isn’t, at least on principle.

So the big point to me is the word ‘learning’. A number of people opined about accessing a how-to video, and let’s be clear: learning doesn’t have to come from that. You could follow the steps and get the job done, and yet have to access it again the next time you need it. Just like I can look up the specs on the resolution of my computer screen, use that information, but have to look it up again next time. So it could be just performance support, and that’s a good thing, but it’s not learning. It suits the notion of micro content, but again, it’s about getting the job done, not developing new skills.

Another interpretation was little bits of components of learning (examples, practice) delivered over time. That is learning, but it’s not microlearning. It’s distributed learning, and the overall learning experience is macro (and much more effective than the massed event model). Again, a good thing, but not (to me) microlearning. This is what Will Thalheimer calls subscription learning.

So, then, if these aren’t microlearning, what is? To me, microlearning has to be a small but complete learning experience, and this is non-trivial. To be a full learning experience, it requires a model, examples, and practice. This could work with very small learnings (I use an example of media roles in my mobile design workshops). I think there’s a better model, however. To explain, let me digress.

When we create formal learning, we typically take learners away from their workplace (physically or virtually), and then create contextualized practice. That is, we may present concepts and examples (beforehand via blended learning, ideally, or less effectively in the learning event), and then we create practice scenarios. This is hard work. Another alternative is more efficient: we layer the learning on top of the work learners are already doing.

Now, why isn’t this performance support? Because we’re not just helping them get the job done; we’re explicitly turning it into a learning event by not only scaffolding the performance, but layering on a minimal amount of conceptual material that links what they’re doing to a model. We (should) do this in examples and feedback on practice; now we can do it around real work. We can because (via mobile or instrumented systems) we know where they are and what they’re doing, and we can build content to do this. It’s always been a promise of performance support systems that they could do learning on top of helping the outcome, but it’s as yet seldom seen.

And the focus on minimalism is good, too. We over-write and over-produce, adding in lots that’s not essential. Cf. Carroll’s Nurnberg Funnel or Cathy Moore’s Action Mapping. And even for non-mobile, minimalism makes sense (as I tout under the banner of the Least Assistance Principle). That is, it’s really not rude to ask people (or yourself as a designer) "what’s the least I can do for you?" Because that’s what people generally really prefer: give me the answer and let me get back to work!
Microlearning as a phrase has probably become current (he says, cynically) because elearning providers are touting it to sell the ability of their tools to deliver to mobile. But it can also be a watchword to emphasize thinking about performance support, learning ‘in context’, and minimalism. So I think we may want to continue to use it, but I suggest it’s worthwhile to be very clear what we mean by it. It’s not courses on a phone (mobile elearning), and it’s not spaced-out learning; it’s small but useful full learning experiences that can fit, by size of objective or context, ‘in the moment’. At least, that’s my take; what’s yours?
Clark   .   Blog   .   Nov 22, 2015 05:08am
I’m writing a chapter about mobile trends, and one of the things I’m concluding with is the different ways we need to think to take advantage of mobile. The first one emerged as I wrote and kind of surprised me, but I think there’s merit.

The notion is one I’ve talked about before: what our brains do well, and what mobile devices do well, are complementary. That is, our brains are powerful pattern matchers, but have a hard time remembering rote information, particularly arbitrary or complicated details. Digital technology is the exact opposite. So, that complementation, whenever or wherever we are, is quite valuable.

Consider chess. When computers first played against humans, they didn’t do well. As computers became more powerful, however, they finally beat the world champion. But they didn’t do it like humans do; they did it by very different means. They couldn’t evaluate positions well, but they could calculate many more turns ahead and use simple heuristics to determine whether those were good plays. The sheer computational ability eventually trumped the familiar pattern approach. Now, however, there’s a new type of competition, where a person and a computer team up and play against another similar team. The interesting result is that the winner is not the best chess player, nor the best computer program, but the player who knows best how to leverage a chess companion.

Now map this to mobile: we want to design the best complement for our cognition. We want to end up having the best cyborg synergy, where our solution does the best job of leaving to the system what it does well, and leaving to the person the things we do well. It’s maybe only a slight shift in perspective, but it is a different view than designing to be, say, easy to use. The point is to have the best partnership available.

This isn’t just true for mobile, of course; it should be the goal of all digital design. The specific capability of mobile, using sensors to do things because of when and where we are, though, adds unique opportunities, and that has to figure into our thinking as well. As does, of course, a focus on minimalism, and thinking about content in a new way: not as a medium for presentation, but as a medium for augmentation, to complement the world, not subsume it.

It’s my thinking that this focus on augmenting our cognition and our context with content that’s complementary is the way to optimize the uses of mobile. What’s your thinking?
Clark   .   Blog   .   Nov 22, 2015 05:08am
Several events are coming up that I should mention ("coming to a location near you!"):

If you’re anywhere near Austin, you should check out the upcoming eLearning Symposium May 7 and 8. I’m speaking on the L&D Revolution I’m trying to incite, and then offering a half-day workshop to help you get your strategy going. There’s a nice slate of other speakers to help you dig deeper into elearning.

I’ll also be speaking on Serious eLearning at Callidus Cloud Connections in Las Vegas May 11-13. If you’re into Litmos, or thinking about it, it’s the place to be.

If you’re near Atlanta, I’ll be busting learning myths in an evening session for the ATD Chapter on the 2nd of June, and then running a learning game workshop on the 3rd. You’ll find out more about learning and engagement, and how you can and should add game elements to your learning design. I’m serious when I say that "learning can, and should, be hard fun".

And I’ll be touting the needed L&D Revolution up in Vancouver June 11, keynoting the CSTD Symposium. There’s a great lineup of talks to raise your game.

I would love to meet you at one of these events; hope to see you there (or there, or there, or there).
Clark   .   Blog   .   Nov 22, 2015 05:08am
In the industrial age, you really didn’t need to understand why you were doing what you were doing; you were just supposed to do it. At the management level, you supervised behavior, but you didn’t really set strategy. It was only at the top level that you used the basic principles of business to run your organization. That was then, this is now.

Things are moving faster, competitors are able to counter your advances in months, there’s more information, and this isn’t decreasing. You really need to be more agile to deal with uncertainty, and you need to continually innovate. And I want to suggest that this advantage comes from having a conceptual understanding, a model of what’s happening.

There are responses we can train, specific ways of acting in context. These aren’t what are most valuable any more. Experts, with vast experience responding in different situations, abstract models that guide what they do, consciously or unconsciously (this latter is a problem, as it makes it harder to get at; experts can’t tell you 70% of what they actually do!). Most people, however, are in the novice to practitioner range, and they’re not necessarily ready to adapt to changes unless we prepare them.

What gives us the ability to react is having models that explain the underlying causal relations as we best understand them, and then support in applying those models in different contexts. If we have models, and see how those models guide performance in context A, then B, and then we practice applying them in contexts C and D (with model-based feedback), we gradually develop a more flexible ability to respond. It’s not subconscious, like experts’, but we can figure it out.

So, for instance, if we have the rationale behind a sales process, how it connects to the customer’s mental needs and the current status, we can adapt it to different customers. If we understand the mechanisms of medical contamination, we can adapt to new vectors. If we understand the structure of a cyber system, we can anticipate security threats. The point is that making inferences from models is a more powerful basis than trying to adapt a rote procedure without knowing the basis.

I recognize that I talk a lot in concepts, e.g. these blog posts and diagrams, but there’s a principled reason: I’m trying to give you a flexible basis, models, to apply to your own situation. That’s what I do in my own thinking, and it’s what I apply in my consulting. I am a collector of models, so that I have more tools to apply to solving my own or others’ problems. (BTW, I use concept and model relatively interchangeably, if that helps clarify anything.)

It’s also a sound basis for innovation. Two related models (ahem) of creativity say that new ideas are either the combination of two different models or an evolution of an existing one. Our brains are pattern matchers, and the more we observe a pattern, the more likely it will remind us of something, a model. The more models we have to match, the more likely we are to find one that maps. Or one that activates another.

Consequently, it’s also one of the things I push as a key improvement to learning design. In addition to meaningful practice, give the concept behind it, the why, in the form of a model. I encourage you to look for the models behind what you do, the models in what you’re presented, and the models in what your learners are asked to do. It’s a good basis for design, for problem-solving, and for learning. That, to me, is a big opportunity.
Clark   .   Blog   .   Nov 22, 2015 05:07am
A conversation with a colleague prompted a reflection. The topic was personal learning, and in looking for my intersections (beyond my love of meta-learning), I looked at my books. The Revolution isn’t an obvious match, nor is games (though trust me, I could make them work ;), but a more obvious match was mlearning. So the question is, how do we do personal knowledge mastery with mobile?

Let’s get the obvious out of the way. Most of what you do on the desktop, particularly social networking, is doable on a mobile device. And you can use search engines and reference tools just the same. You can find how-to videos as well. Is there more?

First, of course, are all the things to make yourself more ‘effective’. Using the four key original apps on the Palm Pilot, for instance: your calendar to remind you of events or to check availability, ToDo checklists to remember commitments, memos to take notes for reference, and your contact list to reach people. Which isn’t really learning, but it’s valuable to learn to be good at these.

Then we start doing things because of where you are. Navigation to somewhere, or finding what’s around you, are the obvious choices. Those are things you won’t necessarily learn from, but they make you more effective. But they can also help educate you. You can look where you are on a map and see what’s around you, or identify the thing on the map that’s in that direction ("oh, that’s the Quinnsitute" or "there’s Mount Clark" or whatever), and have a chance of identifying a seen prominence.

And you can use those social media tools as before, but you can also use them because of where or when you are. You can snap pictures of something and send it around and ask how it could help you. Of course, you can snap pictures or film for later recollection and reflection, and contribute them to a blog post for reflection. And take notes by text or audio, or even sketching or diagramming. The notes people take for themselves at conferences, for instance, get shared and are valuable not just for the sharer, but for all attendees. Certainly searching things you don’t understand or, when there’s unknown language, seeing if you can get a translation, are also options. You can learn what something means, and avoid making mistakes.

When you are, e.g. based upon what you’re doing, is a little less developed. You’d have to have rich tagging around your calendar to signal what it is you’re doing for a system to be able to leverage that information, but I reckon we can get there if and when we want.

I’m not a big fan of ‘learning’ on a mobile device, maybe a tablet in transit or something, but not courses on a phone. On the other hand, I am a big fan of self-learning on a phone, using your phone to make you smarter.

These are embryonic thoughts, so I welcome feedback. Being more contextually aware, both in the moment and over time, is a worthwhile opportunity, one we can and should look to advance. There’s much yet to come, though tools like ARIS are going to help change that. And that’ll be good.
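As a thought experiment on that calendar-tagging idea, here’s a sketch of how tagged entries might surface curated resources; the tags, resources, and matching logic are all invented for illustration:

```python
# Hypothetical sketch: surface curated resources based on tags on calendar entries.
from dataclasses import dataclass

@dataclass
class CalendarEntry:
    title: str
    tags: set[str]  # rich tags signaling what kind of activity this is

# A small curated mapping from activity tags to support resources (all invented).
resources_by_tag = {
    "client-negotiation": ["Negotiation checklist", "Pricing model notes"],
    "design-review": ["Review rubric", "Past review examples"],
}

def resources_for(entry: CalendarEntry) -> list[str]:
    """Collect resources relevant to whatever the entry's tags say we're doing."""
    found = []
    for tag in entry.tags:
        found.extend(resources_by_tag.get(tag, []))
    return found

meeting = CalendarEntry("Acme renewal call", {"client-negotiation"})
print(resources_for(meeting))  # ['Negotiation checklist', 'Pricing model notes']
```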
Clark   .   Blog   .   Nov 22, 2015 05:07am
Why should you, as a learning designer, take a game design workshop? What is the relationship between games and learning? I want to suggest that there are very important reasons why you should.

Just so you don’t think I’m the only one saying it: in the decade since I wrote the book Engaging Learning: Designing e-Learning Simulation Games, there have been a large variety of books on the topic. Clark Aldrich has written three, at last count. James Paul Gee has pointed out how the semantic features of games match the way our brains learn, as has David Williamson Shaffer. People like Kurt Squire, Constance Steinkuehler, Henry Jenkins, and Sasha Barab have been strong advocates of games for learning. And of course Karl Kapp has a recent book on the topic. You could also argue that Raph Koster’s A Theory of Fun is another vote, given that his premise is that fun is learning. So I’m not alone in this.

But more specifically, why get steeped in it? I want to give you three reasons: understanding engagement, understanding practice, and understanding design. Not to say you don’t know these, but I’ll suggest that there are depths you’re not yet incorporating into your learning, and you could and should. After all, learning should be ‘hard fun’.

The difference between a simulation and a game is pretty straightforward. A simulation is just a model of the world; it can be in any legal state and be taken to any other. A self-motivated and effective self-learner can use that to discover what they need to know. But for specific learning purposes, we put that simulation into an initial state, and ask the learner to take it to a goal state, and we’ve chosen those so that they can’t do it until they understand the relationships we want them to understand. That’s what I call a scenario, and we typically wrap a story around it to motivate the goal. We can turn that into a game, but by tuning. And that’s the important point about engagement: we can’t call it a game; only our players can tell us whether it’s a game or not.

To achieve that goal, we have to understand what motivates our learners, what they care about, and figure out how to integrate that into the learning. It’s about not designing a learning event, but designing a learning experience. And, by studying how games achieve that, we can learn how to take our learning from mundane to meaningful. Whether or not we have the resources and desire to build actual games, we can learn valuable lessons to apply to any of our learning design. It’s the emotional element most ID leaves behind.

I also maintain that, next to mentored live practice, games are the best thing going (and individual mentoring doesn’t scale well, and live practice can be expensive, both to develop and particularly when mistakes are made). Games build upon that by providing deep practice: embedding important decisions in a context that makes the experience as meaningful as when it really counts. We use game techniques to heighten and deepen the experience, which makes it closer to live practice, reducing transfer distance. And we can provide repeated practice. Again, even if we’re not able to implement full game engines, there are many important lessons to take to designing other learning experiences: how to design better multiple-choice questions, the value of branching scenarios, and more. Practical improvements that will increase engagement and increase outcomes.
Finally, game designers use design processes that have a lot to offer to formal learning design. Their practices in terms of information collection (analysis), prototyping and refinement, and evaluation are advanced by the simple requirement that their output is such that people will actually pay for the experience. There are valuable elements that can be transferred to learning design even if you aren’t expecting to have an outcome so valuable you can charge for it.

As professionals, it behooves us to look to other fields with implications that could influence and improve our outcomes. Interface design, graphic design, software engineering, and more are all relevant areas to explore. So is game design, and it’s arguably the most relevant one we can tap.

So, if you’re interested in tapping into this, I encourage you to consider the game design workshop I’ll be running for the ATD Atlanta chapter on the 3rd of June. The price is fair even if you’re not a chapter member, and it’s a great deal if you are. Further, it’s a tried and tested format that’s been well received since I first started offering it. The night before, I’ll be busting myths at the chapter meeting. I hope I’ll see you there!
Clark   .   Blog   .   Nov 22, 2015 05:07am
I’ve been working on a learning design that integrates developing social media skills with developing specific competencies, aligned with real work. It’s an interesting integration, and I drafted a pedagogy that I believe accomplishes the task. It draws heavily on the notion of activity-based learning. For your consideration.

The learning process is broken up into a series of activities. Each activity starts with giving the learning teams a deliverable they have to create, with a deadline an appropriate distance out. There are criteria they have to meet, and the challenge is chosen such that it’s within their reach, but out of their grasp. That is, they’ll have to learn some things to accomplish it.

As they work on the deliverable, they’re supported. They may have resources available to review, ideally curated (and, across the curricula, their responsibility for curating their own resources is developed as part of handing off the responsibility for learning to learn). There may be people available for questions, and they’re also being actively watched and coached (less as they go on).

Now, ideally the goal would be a real deliverable that would achieve an impact on the organization. That, however, takes a fair bit of support to make it a worthwhile investment. Depending on the ability of the learners, you may start with challenges that are like, but not necessarily, real challenges, such as evaluating a case study or working on a simulation. The costs of mentoring go up as the consequences of the actions do, but so do the benefits, so it’s likely that the curriculum will similarly get closer to live tasks as it progresses.

At the deadline, the deliverables are shared for peer review, presumably with other teams. In this instance, there is a deliberate intention to have more than one team, as part of the development of the social capabilities. Reviewing others’ work, initially with evaluation heuristics, is part of internalizing the monitoring criteria, on the path to becoming a self-monitoring and self-improving learner. Similarly, the freedom to share work for evaluation is a valuable move on the path to a learning culture. Expert review will follow, to finalize the learning outcomes.

The intent is also that the conversations and collaborations happen in a social media platform. This is part of helping the teams (and the organization) acquire social media competencies. Sharing, working together, accessing resources, etc. are being used in the platform just as they are used for work. At the end, at least, they are being used for work!

This has emerged as a design that develops both specific work competencies and social competencies in an integrated way. Of course, the proof is when there’s a chance to run it, but in the spirit of working out loud… your thoughts welcome.
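To make the structure concrete, here’s a sketch of how one such activity might be specified; the fields and values are invented, just capturing the elements described above (deliverable, deadline, criteria, curated resources, and the review sequence):

```python
# Hypothetical sketch of one activity in the series described above.
from dataclasses import dataclass, field

@dataclass
class Activity:
    deliverable: str              # what the learning team must create
    deadline_days: int            # an appropriate distance out
    criteria: list[str]           # what the deliverable has to meet
    curated_resources: list[str]  # support available while they work
    review_phases: list[str] = field(
        default_factory=lambda: ["peer review (with heuristics)", "expert review"]
    )

activity = Activity(
    deliverable="Evaluation report on the Acme case study",
    deadline_days=14,
    criteria=["Identifies the performance gap", "Recommends an intervention"],
    curated_resources=["Gap-analysis primer", "Prior example reports"],
)
```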
Clark   .   Blog   .   Nov 22, 2015 05:06am
In a recent debate with my colleague on the Kirkpatrick model, our host/referee asked me whether I’d push back on a request for a course. Being cheeky, I said yes, but of course I know it’s harder than that. And I’ve been mulling the question, and trying to think of a perhaps more pragmatic (and diplomatic ;) approach. So here’s a cut at it.

The goal is not to stop with just ‘yes’, but to follow up. The technique is to drill in for more information under the guise of ensuring you’re making the right course. Of course, really you’re trying to determine whether there really is a need for a course at all, or whether a job aid or checklist will do instead, and if so, what’s critical to success. To do this, you need to ask some pointed questions with a demeanor of being professional and helpful.

You might, then, ask something like "what’s the problem you’re trying to solve?" or "what will the folks taking this course be able to do that they’re not doing now?" The point is to start focusing on the real performance gap you’re addressing (and unmasking whether they really know). You want to keep away from the information they think needs to be in the head, and focus in on what decisions people need to make that they can’t make now. Experts can’t tell you what they actually do, or at least about 70% of it, so you need to drill in more about behaviors, but at this point you’re really trying to find out what’s not happening that should be. You can use the excuse that "I just want to make sure we do the right course" if there’s some pushback on your inquiries, and you may also have to stand up for your requirements on the basis that you have expertise in your area, and they have to respect that just as you respect their expertise in their area (c.f. Jon Aleckson’s MindMeld).

If what you discover does end up being about information, you might ask "how fast will this information be changing?" and "how much of this is critical to making better decisions?" It’s hard to get information into the head, and it’s a futile effort if it’ll be out of date soon, and an expensive one if it’s a large amount of arbitrary detail. It’s also easy to think that information will be helpful (the nice-to-know as well as the must-know), but really you should be looking to put information in the world if you can. There are times when it has to be in the head, but not as often as your stakeholders and SMEs think. Focus on what people will do differently.

You also want to ask "how will we know the course is working?" You can ask about what change would be observed, and should talk about how you will measure it. Again, there could be pushback, but you need to be prepared to stick to your guns. If it isn’t going to lead to some measurable delta, they haven’t really thought it through. You can help them here, doing some business consulting on ROI for them. And here it’s not a guise; you really are being helpful.

So I think the answer can be ‘yes’, but that’s not the end of the conversation. And this is the path to start demonstrating that you are about business. This may be the path that starts getting your contribution to the organization to be strategic. You’ll have to be about more than efficiency metrics (cost/seat/hour; "may as well weigh ’em") and about how you’re actually impacting the business. And that’s a good thing. Viva la Revolucion!
Clark   .   Blog   .   Nov 22, 2015 05:06am
One of the mantras of the Learning Organization is that there should be experimentation. This has also become, of course, a mantra of the Revolution as well. So the question becomes: what sort of experiments should we be considering?

First, for reasons both pragmatic and principled, these are more likely to be small experiments than large. On principled grounds, even large changes are probably better off implemented as small steps. On pragmatic grounds, small changes can be built upon or abandoned as outcomes warrant. These small changes have colloquially been labeled ‘trojan mice’, a cute way to capture the notion of change via small incursions.

The open question, then, is what sort of trojan mice might be helpful in advancing the revolution? We might think of them in each of the areas of change: formal, performance support, social, culture, etc. What are some ideas?

In formal learning, we might, for one, push back on taking orders. For instance, we might start asking about the measures that any initiative will be intended to address. We could also look to implementing some of the Serious eLearning Manifesto ideas. Small steps to better learning design.

For performance support, one of the first small steps might be to even do performance support, if you aren’t already. If you are, maybe look to broadening the media you use (experiment with a video, an annotated sequence of pictures, or an ebook). Or maybe try creating a portal that is user-focused, not business-silo structured.

In the social area, you might first have to pilot an enterprise social network if there isn’t one. If there is, you might start hosting activities within it. A ‘share your learning lunch’ might be a fun way to talk about things, and bring out meta-learning. Certainly, you could start instituting its use within L&D.

And with culture, you might start encouraging people to share how they work and what resources they use. Maybe film the top performers in a group giving a minute or two talk on how they do what they do. It’d be great if you could get some of the leadership to start sharing, and maybe do a survey of what your culture actually is.

The list goes on: in tech you might try some microlearning, a mobile experiment, or considering a content model (ok, not actually building one, that’s a big step ;). In strategy, you might start gathering data about what the overall organization’s goals are, or what initiatives in infrastructure have been taken elsewhere in the org or are being contemplated.

The point is to start taking some small steps. So, I’m curious: what small steps have you tried, or what ones might you think of and suggest?
Clark   .   Blog   .   Nov 22, 2015 05:06am
So, I was continuing the campaign for the Revolution, and wanted to expand the audience interaction. I could’ve used the tired ‘turn to your neighbor’ technique, but I had a thought (dangerous, that). Could it be improved upon?

As I may have mentioned, there has been a backlash against ‘brainstorming’. For example, the New York Times had an article about how it didn’t work, saying that if you bring people into a room, give them a problem or topic, and then get them to discuss, it won’t work. And they’re right! Because that is a broken model of brainstorming; it’s a straw man argument. A real model of brainstorming has the individuals thinking about the problem individually beforehand, before you bring them together. When people don’t get a chance to think independently, the first person to speak colors the thoughts of the others; but if people can come up with their own ideas first, then share and improve, it works well. The room is smarter than the smartest person in the room, as the quote has it, but the caveat is that you have to manage the process right.

So how does this relate to ‘turn to your neighbor’? It occurred to me that a clear implication was that if you thought to yourself first, before sharing, you’d get a better outcome. And so that’s what I did: I had them think for themselves on the question I presented, then share, and then stop. Now, to be fair, I didn’t have time to ask for all the output; instead I asked who had come up with ‘formal’ for a question on what supports optimal execution, and who came up with facilitating the flow of information as a solution for supporting innovation. So we have practical limits on what we can do with a large audience and a small amount of time.

However, I did ask at the end of the first one whether they thought it worthwhile. And I asked again of a subset of the audience who attended the next-day workshop ("Clark Quinn’s workshop on Strategic Elearning is awesome" was a comment, <fist pump>) what they thought. Overall the feedback was that it was an improvement. Certainly the outputs should be better. One comment was "energized". The overall take of the large audience and the smaller one was very positive. It doesn’t take much longer, because it’s easy to do the quick thinking bit (and it’s no easier to get them to stop sharing :), but it’s a lesson and an improved technique all in one!

So, now you know that if you see anyone doing just the ‘turn to your neighbor’, they’re not up on the latest research. Wonder if we can get this to spread? Continued exploration is a necessary element of improvement, and innovations happen through diligent work and refinement. Please do try it out and let me know how it goes! And, of course, even just your thoughts.
Clark   .   Blog   .   Nov 22, 2015 05:06am
David McCandless gave a graphically and conceptually insightful talk on the power of visualization at Callidus Cloud Connections. He demonstrated the power of insight that comes from tapping into our pattern-matching cognitive architecture.
Clark   .   Blog   .   Nov 22, 2015 05:05am
Is there an appetite for change in L&D? That was the conversation I’ve had with colleagues lately. And I have to say that the answer is mixed, at best.

The consensus is that most of L&D is comfortably numb. L&D folks are barely coping with getting courses out on a rapid schedule and running training events, because that’s what’s expected and known. There really isn’t any burning desire for change, or willingness to move even if there is. This is a problem.

As one commented: "When I work with others (managers etc) they realise they don’t actually need L&D any more". And that’s increasingly true: with tools to do narrated slides, screencasts, and videos in the hands of everyone, there’s little need to have the same old ordinary courses coming from L&D. People can create or access portals to share created and curated resources, and social networks to interact with one another. L&D will become just a part of HR, addressing the requirements (onboarding and compliance); everything else will be self-serve.

The sad part of this is the promise of what L&D could be doing. If L&D started facilitating learning, not controlling it, things could go better. If L&D realized it was about supporting the broad spectrum of learning, including self-learning, and social learning, and research and problem-solving and trouble-shooting and design and all the other situations where you don’t know the answer when you start, the possibilities are huge. L&D could be responsible for optimizing execution of the things they know people need to do, but with a broader perspective that includes putting knowledge into the world when possible. And L&D could also be optimizing the ability of the organization to continually innovate.

It is this possibility that keeps me going. There’s the brilliant world where the people who understand learning combine with the people who know technology and work together to enable organizations to flourish. That’s the world I want to live in, and as Alan Kay famously said: "the best way to predict the future is to invent it." Can we, please?
Clark   .   Blog   .   Nov 22, 2015 05:05am
One of the themes I’ve been strumming in presentations is that we should complement what we do well with tools that do well the things we don’t. A colleague reminded me that JCR Licklider wrote of this decades ago (and I’ve similarly followed the premise through the writings of Vannevar Bush, Doug Engelbart, and Don Norman, among others).

We’re already seeing this. Chess has changed from people playing people, through people playing computers and computers playing computers, to computer-human pairs playing other computer-human pairs. The best competitors aren’t the best chess players or the best programs, but the best pairs, that is, the player and computer that best know how to work together.

The implication is to stop trying to put everything in the head, and start designing systems that complement us in ways that assure the combination is the optimal solution to the problem being confronted. Working backwards, we should decide what portion should be handled by the computer and what by the person (or team), then design the resources, and then train the humans to use those resources in context to achieve the goals.

Of course, this only covers known problems, the ‘optimal execution’ phase of organizational learning. We similarly want the right complements to support the ‘continual innovation’ phase as well. That means we have to provide tools for people to communicate, collaborate, create representations, access and analyze data, and more. We need to support ways for people to draw upon and contribute to their communities of practice from their work teams. We need to facilitate the formation of work teams, and make sure that this process of interaction is provided with just the right amount of friction. Just like a tire, interaction requires friction: too little and you go skidding out of control; too much and you impede progress. People need to interact constructively to get the best outcomes. Much is known about productive interaction, though little enough seems to make its way into practice.

Our design approaches need to cover the complete ecosystem, everything from courses and resources to tools and playgrounds. And it starts by looking at distributed cognition, recognizing that thinking isn’t done just in the head, but in the world, across people and tools. Let’s get out and start playing instead of staying in old trenches.
Clark   .   Blog   .   Nov 22, 2015 05:05am
At a recent meeting, one of my colleagues mentioned that increasingly people weren’t throwing away prototypes. Which prompted reflection, since I have been a staunch advocate for revolutionary prototyping (and here I’m not talking about "the" Revolution ;).

When I used to teach user-centered design, the tools for creating interfaces were complex. The mantras were test early, test often, and I advocated Double Double P’s (Postpone Programming, Prefer Paper; an idea I first grabbed from Rob Phillips, then at Curtin). The reason was that if you started building too early in the design phase, you’d have too much invested to throw things away if they weren’t working.

These days, with agile programming, we see sprints producing working code, which then gets elaborated in subsequent sprints. And the tools make it fairly easy to work at a high level, so it doesn’t take too much effort to produce something. So maybe we can make things that we can throw out if they’re wrong.

Ok, confession time: I have to say that I don’t quite see how this maps to elearning. We have sprints, but how do you have a workable learning experience and then elaborate it? On the other hand, I know Michael Allen’s doing it with SAM, and Megan Torrance just had an article on it, but I’m not clear whether they’re talking storyboard, and then coded prototype, or…

Now that I think about it, I think it’d be good to document the core practice mechanic, and perhaps the core animation, and maybe the spread of examples. I’m big on interim representations, and perhaps we’re talking about the same thing. And if not, well, please educate me!

I guess the point is that I’m still keen on being willing to change course if we’ve somehow gotten it wrong. Small representations are good, increasing fidelity is fine, and so I suppose it’s okay if we don’t throw out prototypes often, as long as we do when we need to. Am I making sense, or what am I missing?
Clark   .   Blog   .   Nov 22, 2015 05:05am
A colleague was describing his journey, and attributed much of his success (rightly) to his core skills, including his creativity. I was resonating with his list until I got to ‘attention to detail’, and it got me thinking.

Attention to detail is good, right? We want people to sweat the nuances, and I certainly am inspired by folks who do that. But there are times when I don’t want to be responsible for the details. To be sure, these are times when it doesn’t make sense to have me do the details. For example, once I’ve helped a client work out a strategy, the implementation really should largely be on them, though I might take on some spot reviews (far better than just helping them start and then abandoning them).

So I wondered what the alternative would be. Now, the obvious thought is lack of attention to detail, which might initially seem negative, but could there be a positive connotation? What came to me was attention to connections. That is, seeing how what’s being considered might map to a particular conceptual model, or a related field. Seeing how it’s contextualized, and bringing together solutions. Seeing the forest, not the trees.

I’m inclined to think that there are benefits to those who see connections, just as there is a need for those who can plug away at the details. And it’s probably contextual; some folks will be one in one area and another in another. For example, there are times I’m too detail-oriented (e.g. fighting for conceptual clarity), and times when I’m missing connections (particularly in reading the politics of a situation). And vice-versa: times when I’m not detail-oriented enough, and very good at seeing connections.

They’re probably not ends of a spectrum, either, as I’ve gone away from that in practical matters (hmm, wonder what that implies about the Big 5?). Take introvert and extrovert: from a learning perspective it’s about how well you learn on your own versus how well you learn with others, and you could be good or bad at each or both. Similarly here, you could be able to do both (as with my colleague: he’s one of the smartest folks I know, demonstrably innovative and connecting as well as being able to sweat the details whether writing code or composing music).

Or maybe this is all a post-hoc justification for wanting to play out at the conceptual frontier, but I’m not going to apologize for that. It seems to work…
Clark   .   Blog   .   Nov 22, 2015 05:04am
I was thinking about how to make meaningful practice, and I had a thought tied to some previous work that I may not have shared here. So allow me to do that now.

Ideally, our practice has us performing in ways that are like the ways we perform in the real world. While it is possible to make alternatives available that represent different decisions, sometimes there are nuances that require us to respond in richer ways. I’m talking about things like writing up an RFP, or a response letter, or creating a presentation, or responding to a live query. And while these are desirable things, they’re hard to evaluate.

The problem is that our technology for evaluating freeform text is limited, let alone anything more complex. While there are tools like latent semantic analysis that can be developed to read text, they’re complex to develop, and they won’t work on spoken responses, let alone spreadsheets or slide decks (common forms of business communication). Ideally, people would evaluate them, but that’s not a very scalable solution if you’re talking about mentors, and even peer review can be challenging for asynchronous learning.

An alternative is to have the learner evaluate themselves. We did this in a course on speaking, where learners ultimately dialed into an answering machine, listened to a question, and then spoke their responses. What they then could do was listen to a model response as well as their own. Further, we could provide a guide, an evaluation rubric, to help the learner evaluate their response with respect to the model response (e.g. "did you remember to include a statement and examples?").

This would work with more complex items, too: "here’s a model spreadsheet (or slide deck, or document); how does it compare to yours?" This is very similar to the types of social processing you’d get in a group, where you see how someone else responded to the assignment, and then evaluate.

This isn’t something you’d likely do straight off; you’d probably scaffold the learning with simple tasks first. For instance, in the example I’m talking about, we first had them recognize well- and poorly-structured responses, then create them from components, and finally create them in text before having them call into the answering machine. Even then, they first responded to questions they knew they were going to get, before tasks where they didn’t know the questions. But this approach serves as an enriching practice on the way to live performance.

There is another benefit besides allowing the learner to practice in richer ways and still get feedback. In the process of evaluating the model response and using an evaluation rubric, the learner internalizes the criteria and the process of evaluation, becoming a self-evaluator and consequently a self-improving learner. That is, they use a rubric to evaluate their response and the model response, and as they go forward, that rubric can continue to guide them as they move out into a performance situation.

There are times when this may be problematic, but increasingly we can and should mix media and use technology to help us close the gap between the learning practice and the performance context. We can prompt, record learner answers, and then play back theirs and the model response with an evaluation guide. Or we can give them a document template and criteria, take their response, and ask them to evaluate theirs and another, again with a rubric.
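To make that flow concrete, here’s a minimal sketch of the self-evaluation loop: present the learner’s response beside the model response, then walk through a rubric. The rubric items and function names are invented for illustration:

```python
# Hypothetical sketch of the self-evaluation flow: the learner responds, then
# rates their own response against a model response using a shared rubric.
from dataclasses import dataclass

@dataclass
class RubricItem:
    prompt: str  # a criterion the learner checks their response against

rubric = [
    RubricItem("Did you open with a clear statement of your position?"),
    RubricItem("Did you support it with at least two concrete examples?"),
    RubricItem("Did you close by restating the takeaway?"),
]

def self_evaluate(learner_response: str, model_response: str) -> list[bool]:
    """Show the model answer beside the learner's, then collect yes/no self-ratings."""
    print("Your response:\n", learner_response)
    print("Model response:\n", model_response)
    ratings = []
    for item in rubric:
        answer = input(f"{item.prompt} (y/n): ")
        ratings.append(answer.strip().lower() == "y")
    return ratings
```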
This is richer practice, and it shifts the learning burden to the learner, helping them become self-learners. I reckon it’s a good thing. I’ll suggest that you consider this as another tool in your repertoire of ways to create meaningful practice. What do you think?
Clark   .   Blog   .   Nov 22, 2015 05:04am
The following was prompted by a discussion on how education has the potential to be disrupted. And I don’t disagree, but I don’t see the disruptive forces marshaling that I think it will take. Some thoughts I lobbed in another forum (lightly edited):

Mark Warschauer, in his great book Learning in the Cloud (which has nothing to do with ‘the cloud’), pointed out that there are only three things wrong with public education: the curricula, the pedagogy, and the way they use tech; other than that, they’re fine. Ahem. And much of what I’ve read about disruption seems flawed in substantial ways.

I’ve seen the for-profit institutions, and they’re flawed because even if they did understand learning (and they don’t seem to), they’re handicapped: they have to dance to the ridiculous requirements of accrediting bodies. Those bodies don’t understand why SMEs aren’t a good source of objectives, so the learning goals are not useful to the workplace. It’s not the profit requirement per se, because you could do good learning, but you have to start with good objectives, and then understand the nuances that make learning effective. WGU is at least being somewhat disruptive on the objectives.

MOOCs don’t yet have a clear business model; right now they’re subsidized by either the public institutions or biz experiments. And the pedagogy doesn’t really scale well: their objectives also tend to be knowledge-based, and to have a meaningful outcome they’d need to be application-based, and you can’t really evaluate that at scale (unless you get *really* nuanced about peer review, but even then you need some scrutiny that just doesn’t scale). For example, just because you learn to do AI programming doesn’t mean you’re ready to be an AI programmer. That’s the xMOOCs; the cMOOCs have their own problems with expectations around self-learning skills. Lovely dream, but it’s not the world I live in, at least yet.

As for things like the Khan Academy, well, it’s a nice learning adjunct, and they’re moving to a more complete learning experience, but they’re still largely tied to the existing curricula (e.g. doing what Jonassen railed against: the problems we give kids in schools bear no relation to the problems they’ll face in the real world).

The totally missed opportunity across all of this is the possibility of layering 21C skills across the curriculum in a systematic and developable way. If we could get a better curriculum, focused on developing applicable skills and meta-skills, with a powerful pedagogy, in a pragmatically deliverable way… Lots of room for disruption, but it’s really a bigger effort than I’ve yet seen anyone willing to take. And yet, if you did it right, you’d have an essentially unassailable barrier to entry: real learning done at scale. However, I’m inclined to think it’s more plausible in the countries that increasingly ‘get’ that higher ed is an investment in the future of a country, are making it free, and could make it a ‘man on the moon’ program.

I’m willing, even eager, to be wrong on this, so please let me know what you think!
Clark   .   Blog   .   Nov 22, 2015 05:04am
I end up seeing a lot of different elearning. And, I have to say, despite my frequent disparagement, it’s usually well-written; the problem seems to be in the starting objectives. But compared to learning that really has an impact (medical, flight, or military training, for instance), it seems woefully under-practiced. I’d roughly (and generously) estimate that the typical ratio is around 80:20 for content to practice. And, in the context of moving from ‘getting it right’ to ‘not getting it wrong’, that seems woefully inadequate. So, two questions: do we just need more practice, or do we also have too much content? I’ll put my money on the latter. That is: both.

To start, in most of the elearning I see (even stuff I’ve had a role in, for reasons out of my control), the practice isn’t enough. Of course, it’s largely wrong, being focused on reciting knowledge as opposed to making decisions, but there just isn’t enough. That’s ok if you know they’ll be applying it right away, but that usually isn’t the case. We really don’t scaffold the learner from their initial capability, through more and more complex scenarios, until they’re at the level of ability we want: performing the decisions they need to be making in the workplace with enough flexibility and confidence, and with sufficient retention until it’s actually needed. Of course, it shouldn’t be the event model, and that practice should be spaced over time. Yes, designing practice is harder than just delivering content, but it’s not that much harder to develop more than to develop some.

However, I’ll argue we’re also delivering too much content. I’ve suggested in the past that I can rewrite most content to be 40-60% shorter than it started (including my own; it takes me two passes). Learners appreciate it. We want a concise model and some streamlined examples, but then we should get them practicing, and then let the practice drive them to the content. You don’t have to prepackage it as much, either; you can give them some source materials that they’ll be motivated to use, and even some guidance (read: job aids) on how to perform.

And, yes, this is a tradeoff: how do we find a balance that both yields the outcomes we need and doesn’t blow out the budget? It’s an issue, but I suggest that, once you get in the habit, it’s not that much more costly. And it’s much more justifiable when you get to the point of actually measuring your impact. Which many orgs aren’t doing yet. And, of course, we should.

The point is that I think our ratio should really be 50:50, if not 20:80, for content to practice. That’s if it matters; but if it doesn’t, why are you bothering? And if it does, shouldn’t it be done right? What ratios do you see? And what ratios do you think make sense?
Clark . Blog . Nov 22, 2015 05:04am
The past two weeks I’ve been on the road (hence the paucity of posts). They’ve been great opportunities to engage around interesting topics, but they’ve also provided some learning opportunities (ahem). The title of this post, by the way, came from m’lady, who was quoting what a senior Girl Scout said was the biggest lesson she learned from her leader: "to embrace Plan B" ;).

So two weeks ago I was visiting a client working on upping their learning game. This is a challenge in a production environment, but as I discussed in many posts over the second half of 2014 and some this year, I think there are serious actions that can be taken. What’s needed are better ways to work with SMEs, better constraints around what makes useful content, and, perhaps most importantly, around what makes meaningful interaction and practice. I firmly believe there are practical ways to get serious elearning going without radical change, though expect some initial hiccups.

This past week I spoke twice. First, on a broad spectrum of learning directions, to a group that was doing distance learning and wanted to take a step back, review what they’d been doing, and look for improvement opportunities. I covered deeper learning, social learning, meta-learning, and more, then went beyond to talk about 70:20:10, measurement, games and simulations, mlearning, the performance ecosystem, and more. I then moved on to a separate (and delightful) event in Vancouver to promote the Revolution. It was the transition between the two events last week that threw me.

Plan A was to fly back home on Tuesday, then fly on to Vancouver Wednesday morning. But, well, life happened. On both the outbound and return legs of the first engagement my initial flight was delayed (thanks, American), enough each time that I missed my connection. On the way out I just got in later than expected (leading to 4.5 hours of sleep before a long and detailed presentation). But on the way back, I missed the last connecting flight home. And that had several consequences.

Instead of spending Tuesday night in my own bed and repacking for the next day, I spent the night in the Dallas/Fort Worth airport. Since the airline blamed it on weather (tho’ if the incoming flight had been on time, it might’ve gotten out ahead of the storm), they had no obligation to provide accommodation, but there were cots and blankets available. I tried to pull into a dark and quiet place, but most of the good ones were already taken. I found an out-of-the-way boarding gate, but it was bright and loud. I gave up after an hour or so and headed to another area, where I found a lounge, pulled together a couple of armchairs, and managed to doze for 2.5 hours or so before getting up to hunt for some breakfast. Lesson: if something’s not working, change!

I caught a flight back home in just enough time to catch the next one up to Vancouver. The problem was, I wasn’t able to swap out my clothes, so I was desperately in need of some laundry. Upon arriving, I threw a shirt, socks, etc. into a sink, gave them a wash, and hung them up. (I also took a shower, which was not only a necessity after a rough night but a great way to gather myself and feel a bit more human.) The next morning, as I went to put on the shirt, I found a stain! I couldn’t get up in front of all those people with a stained shirt. Plan B was out the door. And the other shirt had acquired a stain too!
Plan C on the dust heap. Now what? Fortunately my presentation was in the afternoon, but I needed to do something. I went downstairs and found a souvenir shop in the hotel, but the shirts were all a wee bit too loud; I didn’t really want to pander to the crowd quite so egregiously. I asked at the hotel desk if there was a place within walking distance to buy a shirt, and indeed there was. I was well and truly on Plan D by this time. So I hiked out to a store and, fortunately, found another shirt I could throw on. Lesson: keep changing!

I actually made the story part of my presentation. The point: just as in my case, organizations need not only optimal execution of their plans, but also the ability to innovate when the plan isn’t working. And L&D can (and should) play a role in this. So help your people be prepared to create and embrace Plan B (and C and… however many adaptations they need).

And one other lesson for me: be better prepared for tight connections to go awry!
Clark . Blog . Nov 22, 2015 05:03am
Why should one work out loud (aka Show Your Work)? Certainly there are risks involved. You could be wrong. You might have to share a mistake. Others might steal your ideas. So why would anyone want to be Working Out Loud? Because the benefits trump the risks.

Working out loud is all about being transparent about what you’re doing, and the benefits are multiple. Others know what you’re doing and can help: they can provide pointers to useful information, they can share what worked (and didn’t) for them, and they’re better prepared for what’s forthcoming.

Those risks? If you’re wrong, you can find out before it’s too late. If you share a mistake, others don’t have to make the same one. If you put your ideas out there, they’re on record if someone tries to steal them. And if someone else uses your good work, it’s to the general benefit.

Now, there are times when this can be bad. If you’re in a Miranda organization, where anything you say can be held against you, it may not be safe to share. If your employer will take what you know and then let you go (without realizing, of course, that there’s more there), it’s not safe. Not all organizations are ready for sharing your work.

Organizations, however, should be interested in creating an environment where working out loud is safe. When folks share their work, the organization benefits: people know what others are working on, they can help one another, and the organization learns faster. Make it safe to share mistakes, not for the sake of the mistake, but for the lesson learned, so no one else has to make the same mistake!

It’s not quite enough to just show your work, however; you really want to ‘narrate’ your work. Working out loud is not just about what you’re doing, but also about explaining why. Letting others see why you’re doing what you’re doing helps them either improve your thinking or learn from it. So not only does your work output improve; your continuing ability to work gets better too!

You can blog your thoughts, microblog what you’re looking at, or make your interim representations available as collaborative documents; there are many ways to make your work transparent. This blog, Learnlets, exists for just that purpose of thinking out loud, so I can get feedback and input, and others can benefit. Yes, there are risks (I have seen my blog purloined without attribution), but the benefits outweigh them. That’s as an independent; imagine if an organization made it safe to share. The whole organization would learn faster. And that’s the key to the continual innovation that will be the only sustainable differentiator.

Organizations that work together effectively are organizations that will thrive. So there are personal benefits and organizational benefits. And I personally think this is a role for L&D (it’s part of the goal of the Revolution). So, work out loud about your efforts to work out loud! #itashare
Clark . Blog . Nov 22, 2015 05:03am
It’s June, and June is Learning Styles month for the Debunker’s Club. Now, I’ve gone off on learning styles before (here, here, here, and here), but it’s been a while, and they refuse to die. They’re like zombies, coming to eat your brain!

Let’s be clear: it’s patently obvious that learners differ. They differ in how they work, what they pay attention to, how they like to interact, and more. Surely it makes sense to adapt the learning to their style, so that we’re optimizing their outcome, right?

Er, no. There is no consistent evidence that adapting to learning styles works. Hal Pashler and colleagues, in a study commissioned for Psychological Science in the Public Interest (read: non-partisan, unbiased, truly independent work), found (PDF) no evidence that adapting to learning styles worked. They reviewed the body of research with statistical rigor: some studies showed positive effects and some negative, but across the studies rigorous enough to be worth evaluating, there was no evidence that adapting learning to learner characteristics had a reliable impact. (A toy illustration of how such pooling works appears at the end of this post.)

At least part of the problem is that the instruments people use to characterize learning styles are flawed. Surely, if learners differ, we can identify how? Not with psychometric validity (that means tests that stand up to statistical analysis). A commissioned study in the UK (like the one above: independent, etc.) led by Coffield evaluated a representative sample of instruments (including the ubiquitous MBTI, Kolb, and more) and found (PDF) that only one met all four standards of psychometric validity. And that one was a simple, single-dimension instrument.

So, what’s a learning designer to do? Several things. First, design for what is being learned: use the best learning design to accomplish the goal, and if a learner has trouble with that approach, provide help. Second, do use a variety of ways of supporting comprehension. The variety is good, even if the evidence for choosing it based upon learning style isn’t. (So, for example, 4MAT isn’t bad; it’s just not based upon sound science, and why you’d want to pay to use a heuristic approach when you can do that for free is beyond me.)

Learners do differ, and we want them to succeed. The best way to do that is good learning experience design. We do have evidence that problem-based and emotionally aware learning design helps. We know we need to start with meaningful objectives, create deep practice, ground it in good models, and support it with rich examples, while addressing motivation, confidence, and anxiety. And using different media maintains attention and increases the likelihood of comprehension. Do good learning design, and please don’t feed the zombie.
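For the statistically curious, here’s a toy illustration of how pooling studies works in principle: an inverse-variance weighted mean effect size. The numbers are invented, not Pashler’s data; the point is just that positive and negative findings can wash out to no reliable effect.

```python
# Toy fixed-effect pooling of study results (invented numbers).
# Each study reports an effect size d and a variance v; studies are
# combined with inverse-variance weights, w = 1/v.

import math

studies = [  # (effect size d, variance v), all hypothetical
    (0.30, 0.04),   # small positive effect
    (-0.25, 0.05),  # small negative effect
    (0.05, 0.02),   # near zero, fairly precise
    (-0.10, 0.03),
]

weights = [1.0 / v for _, v in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))  # standard error of pooled estimate

low, high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled d = {pooled:+.2f}, 95% CI ({low:+.2f}, {high:+.2f})")
```

With these made-up numbers the confidence interval straddles zero: individual studies point both ways, but pooled together there’s no reliable effect, which is the pattern the reviews above describe.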
Clark . Blog . Nov 22, 2015 05:03am