Blogs
In a conversation, a colleague mentioned using social tools to minimize friction, and the thought struck me wrong. In thinking further, I realized that there were different notions of friction that needed to be teased out. So here’s where my thinking went.
I argue that what organizations need is creative friction. I have previously suggested that conversations are the engine of business. These are the discussions where ideas are sparked, and decisions are made. These are also the tools used to negotiate a new and shared understanding that is richer and better than it was before.
There are also unproductive conversations, e.g. meetings with status updates that are more effectively done offline, or jockeying for various sorts of recognition. These are reflections of bad processes and worse culture. There’s a mistaken view that brainstorming doesn’t work; it works well if you know the important elements that make it work, but if you follow misunderstood processes, it won’t produce optimal outcomes. Similarly, if the culture is misaligned, it might be unsafe to share, or folks might be too busy competing.
However, it is when people are constructively interacting - not just pointing to useful resources or answering questions, but working together on a joint project - that you get the important creative friction. Not that it’s bad when people point to useful resources or answer questions; that makes things more efficient. When folks come with different ideas, though, and jointly create a new insight, a new idea, a new product or process, that is when you’re providing the sparks necessary to help organizations succeed. Not all of them will be good, but if they’re new, some subset will likely be good.
If people aren’t sharing what they learn and discover, you might miss the new, or it might not really be new but have been previously discovered and not leveraged. That’s why you should work, and learn, out loud.
The issue with friction, then, is removing the unproductive kind. If people have to be co-located to have these conversations, or don’t have tools to express their understandings and share their thoughts around each other’s work, you have unproductive barriers. I’m a fan of collaborative documents that support annotation and track contributions. Here we can share our ideas, and quickly converge on the elements of disagreement and resolve them. This may require periods of synchronous conversation as well as asynchronous work (we did this when creating the Manifesto).
So it seems to me that having the right culture, tools, and skills is the key to optimizing the innovative outcomes that will drive sustainability for organizations. Now how about some creative inputs to refine and improve this?
(And, at a meta-level, it was a conversation that triggered this deeper thought, just the type of outcome we want to facilitate!)
Clark
Neil Jacobstein gave the keynote for the special Future of Talent event sponsored by SAP and hosted by the Churchill Club. In a wide ranging and inspiring talk, Neil covered how new technologies, models, and methods provide opportunities to transcend our problems and create a world worth living in.
Clark
Finally! Revolutionize Learning & Development (Performance and Innovation Strategy for the Information Age) is now shipping (and has been available on Kindle for a couple of weeks), so at last you should be able to get your mitts on it if you’re so inclined. And, if you’re at all involved in trying to make your organization successful, I will immodestly suggest you might want to be so inclined.
Just as background, it documents my claim that organizational L&D isn’t doing what it could and should be doing, and what it is doing, it is doing badly. Then it lays out elements that should be considered, what it would look like if it were going well, and how you might get there. While the exact strategy for any one organization will be dependent on where they are and the nature of the business, there are some frameworks to help you apply those to your business. The goal is to move to Performance & Development, coupling optimal execution with continual innovation.
If you’re curious but not yet ready to dig in, let me mention a couple of things:
there’s a free sample available
I was interviewed by GameOn Learning’s Bryan Austin about the topic (shorter and more focused)
I was also interviewed by Rick Zanotti of Relate (longer and more wide-ranging)
And, if you’re going to be at ASTD’s International Conference in DC next week, I will be there and:
presenting about the issues on Monday the 5th of May at 4:30 PM in room 146B (session M313)
signing copies right after (5:30 PM) in the ASTD book store
holding an author chat on Wednesday May 7th at 9:45 AM also at the ASTD Bookstore
So, check it out, and see if it makes sense to you. Or you can just go ahead and get it. I hope to see you in DC, and welcome your feedback!
Clark
A number of the Change Agents Worldwide resonated with an image of the 50 reasons not to change at work that was overlaid on an image of Doug Engelbart. The point, of course, is that all these reasons not to change provide a significant barrier to success. We decided that we’d line up and start a blog carnival with each of us addressing one. I decided to address number 45:
45. We’re doing all right as it is.
First, do you have evidence for that, or are you in denial? It’s easy to not want to change because it’ll be hard, so it’s easier to say that things are OK. It is particularly easy if you aren’t really checking! L&D, for instance, has been notoriously bad about seeing whether their interventions are actually achieving any impact (around 3% reporting that they actually go to level 4 on Kirkpatrick’s scale).
If you are checking, what are your benchmarks? Are you measuring just execution, or are you including innovation? Because, as I’ve said before, optimal execution is only going to let you survive; to thrive you’ll need continual innovation. If you’re doing what your strategy says, are you looking out for disruptive forces, and creating your own? You should be checking to see whether you’re striking the balance of being agile enough as well as productive enough.
Finally, is "all right" really good enough? Shouldn’t we be shooting for the best, not just good enough? Do you think your competitors are sitting complacent? We really should be looking for every edge. That doesn’t come from believing we’re doing all right. We should be looking for continual improvement.
Yes, it’s harder not to believe we’re doing all right as it is, but your curiosity should be driving you forward regardless. Your organization, if it isn’t continually learning, is declining. Are you really doing all right? If your definition of ‘all right’ is that you are continually curious and moving forward with experimentation, then I reckon so.
Clark
I had the good fortune to be invited to the Future of Work event that was held here in Silicon Valley two weeks ago, and there were four breakouts, one of which was on learning and knowledge management. You can guess which one I was on (though tempted by the leadership and culture one; there was overlap).
Within that breakout, the activity was to pick four topics and further break out. The issue was meeting workplace needs given the changing nature of work, and I suggested that perhaps the biggest need was to focus on skills that hold true across domains, so-called meta-cognitive skills, and learning to learn (a total surprise, right?). That was one that people were interested in, so that’s what we discussed.
We broke down learning into some component elements. We talked about how your beliefs about learning (your epistemological stance) mattered, as well as your intention to learn, and how effective you were at learning alone and with others. It also matters how well you use tools and external representations, as well as your persistence.
What emerged was that learning skills shouldn’t be taken for granted. And consequently, one of the attendees suggested that perhaps along with IQ and EQ, we should be looking at people’s LQ or learning quotient. I just saw an advertisement that said EQ (they called it EI Emotional Intelligence, probably to avoid trademark infringement) was better than IQ because you can improve your EQ scores. However, the evidence suggests you can improve your LQ scores too.
A decade ago now, Jay Cross and I were pushing the meta-learning lab, and I still think Jay was right claiming that meta-learning might be your best investment. So, are you aware of how you learn? Have you improved how you learn? Can you help others learn more effectively? I believe the answer is yes, and we not only can, but should.
Clark
Arianna Huffington kicked off ASTD’s international conference with a very engaging presentation covering the four pillars to thrive. Alternately funny and wise, it was a great start.
Clark
General Stan McChrystal gave an inspiring and insightful talk about adapting to change, based upon his experience.
Clark
On Tuesday, with a big presentation, the American Society for Training & Development announced a rebranding. The new name is the Association for Talent Development, going from ASTD to ATD. And while this is a necessary move, I think it wasn’t the best change they could’ve made.
ASTD needed the change, for two reasons. For one, ASTD has membership in, and runs events, around the world. There may be other orgs (e.g. CSTD for Canada), but the 800 lb gorilla is ASTD. Second, training, while still a large proportion of what ASTD does (rightly or wrongly), is increasingly being joined by other approaches such as coaching and mentoring.
The first reason resonates, but I have a problem with the second. To put it another way, I believe that the change to Association makes sense, but Talent Development doesn’t. As I stated in Revolutionize Learning & Development, I believe that the necessary direction for organizations is to couple optimal performance with continual innovation. What’s required from Learning & Development, then, is to support all manner of performance and develop continual innovation.
What’s involved is not only supporting training when knowledge needs to be ‘in the head’, but also using performance support when we can. And we need to develop and facilitate organizational innovation. The latter means not only developing individual (and group) ability to interact constructively, but also facilitating useful interactions of all sorts.
And here’s the rub. I see Talent Development as developing people through training, mentoring, and coaching, but I see the potential role for the folks in what is now termed L&D as not only developing people’s abilities, but also supporting their ability to perform, even when that doesn’t develop the person. That is, using performance support when it makes sense should be part of the unit’s responsibility, even when it doesn’t develop the person. Similarly, I see facilitating constructive interaction (curating resources, removing barriers to interaction, supporting tool use, etc.), whether it develops people or not, as a vital role.
That’s the reason I chose to suggest, in the book, that the unit should be renamed Performance & Development; supporting both optimal execution and continual innovation in all relevant ways. The opportunity is to be the strategic organizational resource to ensure that all the intellectual resources of the organization are contributing.
And that is the reason I have a problem with Talent Development. To me, Talent Development is focused only on developing people instead of facilitating overall organization performance. And I think that’s falling short of the opportunity, and the need. Don’t get me wrong, I laud that ASTD made a change, and I think Talent Development is a good thing. Yet I think that our role can and should be more. I wish they’d thought a little broader, and covered all of the potential contributions. So, maybe, Association of Performance & Development or APD. Regardless, it’s a dynamic organization that offers a lot. I just wonder who’s going to fill the gaps.
Clark
One of the things that emerged at the recent A(S)TD conference was that a particular gap might exist. While there are resources about learning design, performance support design, social networking, and more, there’s less guidance about facilitating innovation. Which led me to think a wee bit about what might be involved. Here’s a first take.
So, first, what are the elements of innovation? Well, whether you listen to Steven Berlin Johnson on the story of innovation, or Keith Sawyer on ways to foster innovation, you’ll see that innovation isn’t individual. In previous work, I looked at models of innovation, and found that either you mutate an existing design, or you meld two designs together. Regardless, it comes from working and playing well together.
The research suggests that you need to make sure you are addressing the right problem, diverge on possible solutions via diverse teams under good process, create interim representations, test, refine, repeat. The point being that the right folks need to work together over time.
The barriers are several. For one, you need to get the cultural elements right: welcoming diversity, openness to new ideas, safe to contribute, and time for reflection. Without being able to get the complementary inputs, and getting everyone to contribute, the likelihood of the best outcome is diminished.
You also shouldn’t take for granted that everyone knows how to work and play well together. Someone may not be able to ask for help in effective ways, or perhaps more likely, others may offer input in ways that minimize the likelihood that they’ll be considered. People may not use the right tools for the job, either not being aware of the full range (I see this all the time), or just have different ways of working. And folks may not know how to conduct brainstorming and problem-solving processes effectively (I see this as well).
So, the facilitation role has many opportunities to increase the quality of the outcome. Helping establish the culture, first of all, is really important. A second role would be to understand and promote the match of tools to need. This requires, by the way, staying on top of the available tools. Being concrete about learning and problem-solving processes is another role: educating folks up front, before these skills are needed, and then monitoring for situations that need facilitation and opportunities to tune those skills. Finally, process facilitation itself is critical, whether that means serving in the role, developing the skills in others, or both.
Innovation isn’t an event, it’s a process, and it’s something that I want P&D (Learning & Development 2.0 :) to be supporting. The organization needs it, and who better?
Clark
I’ve been talking a bit recently about deepening formal design, specifically to achieve learning that’s flexible, persistent, and develops the learner’s abilities to become self-sustaining in work and life. That is, not just for a course, but for a curriculum. And it’s more than just what we talked about in the Serious eLearning Manifesto, though of course it starts there. So, to begin with, it needs to start with meaningful objectives, provide related practice, and be trialed and developed, but there’s more: there are layers of development that wrap around the core.
One element I want to suggest is important is also in the Manifesto, but I want to push a bit deeper here. I worked to put in the idea that behind, say, a procedure or a task that you apply to problems, there are models or concepts. That is, a connected body of conceptual relationships that tie together your beliefs about why it should be done this way. For example, if you’ve a procedure or process you want people to follow, there is (or should be) a rationale behind it.
And you should help learners discover and see the relationships between the model and the steps, through examples and the feedback they get on practice. If they can internalize the understanding behind steps, they are better prepared for the inevitable changes to the tools they use, the materials they work on, or the process changes that will come from innovation. Training them on X, when X will ultimately shift to Y, isn’t as helpful unless you help them understand the principles that led to performance on X and will transfer to Y.
Another element is that the output of the activities should be scrutable deliverables that also annotate the thoughts behind the result. These provide evidence of the thinking, both implicit and explicit, and a basis for mentors/instructors to understand what’s good, and what still may need to be addressed, in the learner’s thinking. There’s also the creation of a portfolio of work which belongs to the learner and can represent what they are capable of.
Of course, the choices of activities for the learner initially, and the design of them to make them engaging, by being meaningful to the learner in important ways, is another layer of sophistication in the design. It can’t just be that you give the traditional boring problems, but instead the challenges need to be contextualized. More than that (which is already in the Manifesto), you want to use exaggeration and story to really make the challenges compelling. Learning should be hard fun.
Another layer is that of 21st Century skills (for example, the SCANS competencies). These can’t be taught separately; they really need to manifest across whatever domain learning you are doing. So you need learners to not just learn concepts, but apply those concepts to specific problems. And, in the requirements of the problem, you build in opportunities to problem-solve, communicate, collaborate, i.e. all the foundational and workplace skills. They need to reappear again and again and be assessed (and developed) separately.
Ultimately, you want the learner to be taking on responsibility themselves. Later assignments should include the learner being given parameters and choosing appropriate deliverables and formats for communication. And this requires an additional layer, a layer of annotation on the learning design. The learners need to be seeing why the learning was so designed, so that they can internalize the principles of good design and so become self-improving learners. You, for example, in reading this far, have chosen to do this as part of your own learning, and hopefully it’s a worthwhile investment. That’s the point; you want learners to continue to seek out challenges, and resources to succeed, as part of their ongoing self-development, and that comes by having seen learning design and been handed the keys at some point on the journey, with support that’s gradually faded.
The nuances of this are not trivial, but I want to suggest that they are doable. It’s a subtle interweaving, to be sure, but once you’ve got your mind around it (with scaffolded practice :), my claim is that it can be done, reliably and repeatedly. And it should. To do less is to miss some of the necessary elements for successful support of an individual to become the capable and continually self-improving learner that we need.
I touched on most of this when I was talking about Activity-Based Learning, but it’s worthwhile to revisit it (at least for me :).
Clark
For the current ADL webinar series on mobile, I gave a presentation on contextualizing mobile in the larger picture of L&D (a natural extension of my most recent books). And a question came up about whether I thought wearables constituted mobile. Naturally my answer was yes, but I realized there’s a larger issue, one that gets meta as well as mobile.
So, I’ve argued that we should be looking at models for guiding our behavior. That we should be creating them by abstracting from successful practices, conceptualizing them ourselves, or adopting them from other areas. A good model, with rich conceptual relationships, provides a basis for explaining what has happened, and predicting what will happen, giving us a basis for making decisions. Which means they need to be as context-independent as possible.
So, for instance, when I developed the mobile models I use, e.g. the 4C’s and the applications of learning (see figure), I deliberately tried to create an understanding that would transcend the rapid changes that are characterizing mobile, and make them appropriately recontextualizable.
In the case of mobile, one of the unique opportunities is contextualization. That means using information about where you are, when you are, which way you’re looking, temperature or barometric pressure, or even your own state: blood pressure, blood sugar, galvanic skin response, or whatever else skin sensors can detect.
To put that into context (see what I did there): with desktop learning, augmenting formal learning could mean emails that provide new examples or practice spread out over time. With a smartphone you can do the same, but you could also have localized information, so that because of where you were you might get information related to a learning goal. With a wearable, you might get some information because of what you’re looking at (e.g. a translation or a connection to something else you know), or due to your state (too anxious, stop and wait ’til you calm down).
Similarly for performance support: with a smartphone you could take what comes through the camera and add information onto what shows on the screen; with glasses you could lay it on the visual field. With a watch or a ring, you might have an audio narration. And we’ve already seen how the accelerometers in fitness bracelets can track your activity and put it in context for you.
Social can not only connect you to who you need to know, regardless of device or channel, but also signal you that someone’s near, detecting their face or voice, and clue you in that you’ve met this person before. Or find someone that you should meet because you’re nearby.
All of the above are using contextual information to augment the other tasks you’re doing. The point is that you map the technology to the need, and infer the possibilities. Models are a better basis for elearning, too, so that you teach transferable understandings (made concrete in practice) rather than specifics that can get outdated. This is one of the elements we placed in the Serious eLearning Manifesto, of course. They’re also useful for coaching & mentoring, as well as for problem-solving, innovating, and more.
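To make that technology-to-need mapping concrete, here’s a minimal sketch of choosing an augmentation from whatever context is available. This isn’t any particular platform; all the signal names, thresholds, and interventions are hypothetical.

```python
# A minimal sketch of mapping contextual signals to an augmentation.
# All signal names, thresholds, and interventions are illustrative assumptions.

def choose_augmentation(context):
    """Pick a learning or performance-support augmentation from available context."""
    # State-based (wearable sensors): defer the challenge if the learner is anxious.
    if context.get("heart_rate", 0) > 110:
        return "Prompt: pause and calm down before the next practice task."

    # Location-based (smartphone GPS): surface content tied to a goal for this place.
    if context.get("location") == "warehouse_floor":
        return "Show the safe-lifting example relevant to this station."

    # Gaze-based (glasses/camera): annotate what the learner is looking at.
    if context.get("looking_at") == "control_panel":
        return "Overlay the shutdown checklist on the visual field."

    # Social: flag that a nearby person is someone the learner has met before.
    if context.get("nearby_contact"):
        return "Reminder: you met " + context["nearby_contact"] + " at last quarter's workshop."

    # Default: fall back to scheduled, spaced practice (the desktop-era augmentation).
    return "Send the next spaced-practice example by email."

print(choose_augmentation({"location": "warehouse_floor"}))
```

The specifics don’t matter; the point is that the model (signal, need, augmentation) stays stable even as the devices and sensors keep changing.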
Models are powerful tools for thinking, and good ones will support the broadest possible uses. And that’s why I collect them, think in terms of them, create them, and most importantly, use them in my work. I encourage you to ensure that you’re using models appropriately to guide you to new opportunities, solutions, and success.
Clark
I was in a recent conversation about a company facing strong growth and worried about the impact on culture. Companies with a positive culture, a valuable offering, and a good business model are liable to face growth issues, and maintaining (or establishing) a good culture becomes critical to sustaining the organization’s success.
This company had a positive culture, in that people were diverse, friendly, upbeat, and committed to contributing. These are all positive elements that had led to the early success. Growth, both through hiring and acquisitions, was leading to concerns about the ability for those factors to continue.
One of the things that wasn’t obvious from the initial portrayal of the company was whether folks there were capturing and sharing what they were doing, how they were working, what challenges they were facing, and what results they were seeing. In a small company, this happens naturally through conversation, but face to face communication isn’t scalable.
One obvious possibility is to implement, or more systematically leverage, an enterprise social network (ESN; essentially using social media in the org). Working out loud, as it’s known, has many benefits. As people share their work, others can comment and improve it. People can ask for help and get collaboration on those new problems and innovation needs that are increasingly arising. Mistakes can be made and the lessons learned can be shared so that others don’t have to make the same mistakes.
One of the offshoot benefits of such sharing is that it takes the positive cultural attributes already being shown and makes them visible (if implicitly) as well. It’s not guaranteed, but with an awareness of the behaviors and manifestations of culture through the network, a systematic process could lead to that positive culture scaling and yield those additional benefits that accompany working out loud.
It takes all the elements of a learning culture and organizational change, of course. You need to continue to welcome diversity, be open to new ideas, and have it safe to contribute. You also need to develop a vision, message it, have the leadership model it, facilitate it, anticipate problems and be prepared to address them, and ultimately reward the desired outcomes. But this is doable.
The benefits of a positive culture are becoming known, and the value of social networks is also emerging. Linking them together is not only necessary, but the benefits are more than the sum of the parts.
Clark
I’ve been thinking about the deep challenge of motivating uninterested learners. To me, at least part of that is making the learning of intrinsic interest. And one of those elements is practice, arguably the most important element to making learning work. So how do we make practice intrinsically interesting?
One of the challenging but important components of designing meaningful practice is choosing a context in which that practice is situated. It’s really about finding a story line that makes the action meaningful to both the learner and the learning. It’s creative (and consequently fun), but it’s also not intrinsically obvious (which I’ve learned after trying to teach it in both game design and advanced ID workshops). There are heuristics to be followed (there’s no guaranteed formula except brainstorm, winnow, trial, and refine), however, that can be useful.
While Subject Matter Experts (SMEs) can be the bane of your existence when setting learning goals (they have conscious access to no more than 30% of what they do, so they tend to end up reciting what they know, which they do have access to), they can be very useful when creating stories. There’s a reason why they’ve spent the requisite time to become experts in the field, and that’s an aspect we can tap into. Find out why it’s of interest to them. In one instance, when asking experts about computer auditing, a colleague found that auditors likened it to playing detective, tracking back to find the error. It’s that sort of insight upon which a good game or practice exercise can hinge.
One of the tricks for working with SMEs is to talk about decisions. I argue that what is most likely to make a difference to organizations is that people make better decisions, and I also believe that using the language of decisions helps SMEs focus on what they do, not what they know. Between your performance gap analysis of the situation, and expert insight into what decisions are key, you’re likely to find the key performances you want learners to practice.
You also want to find out all the ways learners go wrong. Here you may well hear instructors and/or SMEs say "no matter what we do, they always…". And those are the things you want to know, because novices don’t tend to make random errors. Yes, there are some, owing to our cognitive architecture (it’s adaptive, which is why it’s bad to expect people to do rote things), but they’re a small fraction of mistakes. Instead, learners make patterned mistakes based upon errors in their conceptualizations of the performance, aka misconceptions. And you want to trap those, because you’ll have a chance to remediate them in the learning context. And they make the challenge more appropriately tuned.
You also need the consequences of both the right choice and the misconceptions. Even if it’s just a multiple choice question, you should show what the real world consequence is before providing the feedback about why it’s wrong. It’s also the key element in scenarios, and building models for serious games.
Then the trick is to ask SMEs about all the different settings in which these decisions are embedded. Such decisions tend to travel in packs, which is why scenarios are better practice than simple multiple choice, just as scenario-based multiple choice trumps a knowledge test. Regardless, you want to contextualize those decisions, and knowing the different settings that can be used gives you a greater palette to choose from.
Finally, you’ll want to decide how close you want the context to be to the real context. For certain high-stakes and well-defined tasks, like flying planes or surgery, you’ll want them quite close to the real situation. In other situations, where there’s more broad applicability and less intrinsic interest (perhaps accounting or project management), you may want a more fantastic setting that facilitates broader transfer.
Exaggeration is a key element. Knowing what to exaggerate and when is not yet a science, but the rule of thumb is to leave the core decisions based upon the important variables, while the stakes in the context can be raised to increase the importance. For example, accounting might not be riveting but your job depends on it. Raising the stakes of the accounting decision in the learning experience will mimic that real importance, so you might be accounting for a mob boss who’ll terminate your existence if you don’t terminate the discrepancy in his accounts! Sometimes exaggeration can serve a pedagogical purpose as well, such as highlighting certain decisions that are rare in real life but really important when they occur. In one instance, we had asthma show up with a 50% frequency instead of the usual ~15%, as the respiratory complications that could occur required specific approaches to address.
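One way to operationalize that kind of pedagogical exaggeration is simply to oversample the rare-but-critical condition when generating practice scenarios. Here’s a minimal sketch; the condition names and rates are illustrative, echoing the ~15% real vs. 50% practice frequency mentioned above.

```python
import random

# Real-world base rate vs. deliberately exaggerated practice rate for a
# rare-but-critical condition; numbers are illustrative.
real_world_rates = {"asthma": 0.15, "no_complication": 0.85}
practice_rates = {"asthma": 0.50, "no_complication": 0.50}

def sample_conditions(rates, n=1000, seed=42):
    """Draw n scenario conditions according to the given frequency table."""
    rng = random.Random(seed)
    conditions, weights = zip(*rates.items())
    return rng.choices(conditions, weights=weights, k=n)

practice_set = sample_conditions(practice_rates)
print("asthma scenarios in a 1000-item practice set:", practice_set.count("asthma"))
```

The design choice is that the decision logic learners practice stays faithful to the real variables; only the frequency (and the drama) gets turned up.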
Ultimately, you want to choose a setting in which to embed the decisions. Just making it abstract decreases the impact of the learning, and making it about knowledge, not decisions, will render it almost useless, except for those rare bits of knowledge that have to absolutely be in the head. You want to be making decisions using models, not recalling specific facts. Facts are better off put in the world for reference, except where time is too critical. And that’s more rare than you’d expect.
This may seem like a lot of work, but it’s not that hard, with practice. And the above is for critical decisions. In many cases, a good designer should be able to look at some content and infer what the decisions involved should be. It’s a different design approach than transforming knowledge into tests, but it’s critical for learning. Start working on your practice items first, aligned with meaningful objectives, and the rest will flow. That’s my claim; what say you?
Clark
Last week, Don Kirkpatrick passed away. Known for his four ‘levels‘ of measuring learning, he’s been hailed and excoriated. And it’s instructive to see why on both sides.
He derived his model as an approach to determine the impact of an intervention on organizational performance. He felt that you worked backward from the change you needed, to determine whether the workplace performance was changing, then to see if that could be attributed to the training, and ultimately to the learner. He numbered his steps so that step 1 was seeing what learners thought, 2 was that learners could demonstrate a change, 3 was that the change was showing up in the workplace post intervention, and 4 was it impacting business measures.
This actually made a lot of sense. Rather than measuring the cost per hour of seat time or some other measure of efficiency, or, worse, not measuring at all, here was a plan that was designed to focus on meaningful change that the business needed. It was obvious, and yet also obviously needed. So his success in bringing awareness to the topic of business impact is to be lauded.
There were two major problems, however. For one, having numbered it the way that it was, people seemed to think they could make a partial attempt. Research shows that many people would only do step 1 or 2, and these are useless without ultimately including 4. He even later wondered if he should have numbered the approach in reverse. The numbers have been documented (from a presentation with results from the ASTD Benchmarking Forum) as dropping from 94% doing level 1, to 34% doing level 2, 13% doing level 3, and 3% doing level 4. That’s not the idea!
The second problem was that whether or not he intended it (and there are reasons to believe he didn’t), it became associated only with training interventions. Performance support interventions or social network outcomes could similarly be measured (at least on levels 3 and 4), yet the language was all about training, which made it easy for folks to wrongly conclude that training was your only tool. And we still see folks using courses as the only tool in their repertoire, which just isn’t aligned with how we think, work, and learn (hence the revolution).
Kirkpatrick rode this tool for the rest of his career, created a family business in it, and he wasn’t shy about suggesting that you buy a book to learn about it. I certainly can’t fault him for it either, as he did have a sensible model and it could be put into effective use. There are worse ways to earn a living.
Others have played upon his model. The Phillipses have made a similar career with their fifth level, ROI, measuring the cost of impacting level 4 against the value of the impact. Which isn’t a bad move to make after you focus on making an impact. Similarly, a client opined that there was also a level 0: are the learners even showing up for the training?
In assessing the impact, part of me is mindful that tools can be used for good or ill. Powerpoint doesn’t kill people, people do, as the saying goes. Still, Kirkpatrick could’ve renumbered the steps, or been more outspoken about the problems with just step 1.
So, I laud his insight, and bemoan the ultimate lack of impact. However, I reckon it’s better to argue about it than be ignorant. Rest in peace.
Clark
A number of years ago, I said that the problem for publishers was not going from text to content (as the saying goes), but from content to experience. I think elearning designers have the same problem: they are given a knowledge dump, and have to somehow transform that into an effective experience. They may even have read the Serious eLearning Manifesto, and want to follow it, but struggle with the transition or transformation. What’s a designer to do?
The problem is, designers will be told, "we need a course on this", and given a dump of Powerpoints (PPTs), documents (PDFs), and maybe access to a subject matter expert (SME). This is all about knowledge. Even the SME, unless prompted carefully otherwise, will resort to telling you the knowledge they’ve learned, because they just don’t have access to what they know. And this, by itself, isn’t a foundation for a course. Processing the knowledge, comprehending it, presenting it, and then testing on acquisition (e.g. what rapid elearning tools make easy), isn’t going to lead to a meaningful outcome. Sorry, knowledge isn’t the same as ability to perform.
And this ignores, of course, whether this course is actually needed. Has anyone checked to see whether the skills associated with this knowledge have a connection with a real workplace performance issue? Is the performance need a result of a lack of skills? And is this content aligned to that skill? Too often folks will ask for a course on X when the barrier is something else. For instance, if the content is a bunch of knowledge that somehow you’re to magically put in someone’s head, such as product information or arbitrary rules, you’re far better off putting that information in the world than trying to put it in the head. It’s really hard to get arbitrary information in the head. But let’s assume that there is a core skill and workplace need for the sake of this discussion.
The key is determining what this knowledge actually supports doing differently. The designer needs to go through that content and figure out what individuals will be able to do that they can’t do now (that’s important), and then develop practice doing that. This is so important that if the ‘do differently’ isn’t there, there should be pushback. While you can talk to the SME (trying to get them to talk in terms of decisions they can make instead of knowledge), you may be better off inferring the decisions and then verifying and refining with the SME. If you have access to several SMEs, better yet, get them in a room together and just facilitate until they come up with the core decisions, but there are many situations where that’s not feasible.
Once you have that key decision, the application of the skill in context, you need to create situations where learners can practice using it. You need to create scenarios where these decisions will play out. Even just better-written multiple choice questions can work, if they have: a story setting, a situation precipitating the decision, decision alternatives that reflect the ways in which learners might go wrong, consequences of the decisions, and feedback. These practice attempts are the core of a meaningful learning experience. And there’s even evidence that putting problems up front or at the core is a valuable practice. You also want to have sufficient practice not just ’til they get it right, but until they have a high likelihood of not getting it wrong.
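To make that shape concrete, here’s a minimal sketch of such a practice item as a data structure. The field names and the sample content are hypothetical, not from any particular authoring tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alternative:
    """One decision option, reflecting either correct thinking or a known misconception."""
    choice: str
    consequence: str      # what happens in the story world, shown first...
    feedback: str         # ...then the conceptual feedback on why it's right or wrong
    correct: bool = False

@dataclass
class ScenarioQuestion:
    """A practice item: story setting -> precipitating situation -> decision."""
    setting: str
    situation: str
    alternatives: List[Alternative] = field(default_factory=list)

# Hypothetical example item
item = ScenarioQuestion(
    setting="You're the accounts analyst for a mid-size retailer.",
    situation="A supplier invoice differs from the purchase order by a small amount.",
    alternatives=[
        Alternative(
            choice="Approve it; the difference is immaterial.",
            consequence="Three months later, the small discrepancies add up to a write-off.",
            feedback="Small mismatches are how systematic errors (and fraud) hide.",
        ),
        Alternative(
            choice="Flag it and trace the discrepancy back to its source.",
            consequence="You find a unit-price update that never reached the PO system.",
            feedback="Tracing back addresses the root cause, not just the symptom.",
            correct=True,
        ),
    ],
)
print(len(item.alternatives), "alternatives for:", item.situation)
```

Note that each wrong alternative carries a consequence shown before the feedback, which is the point: the story world responds first, then the learning.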
One thing that might not be in the PDFs and PPTs are examples. It’s helpful to get colorful examples of someone using the information to successfully solve a problem, and also cases where they misapplied it and failed. Your SME should be able to help you here, telling you engaging stories of wins and losses. They may be somewhat resistant to the latter; worst case, have them tell stories about someone else.
The content in the PDFs and PPTs then gets winnowed down into just the resource material that helps the learner actually be able to do the task, to successfully make the decision. Consider having the practice set in a story, with the content available through the story environment (e.g. casebooks on the shelves for examples, a ‘library’ for concepts). But even if you present the (minimized) content and then have practice, you’ve shifted from knowledge dump/test to more of a flow of experience. The suite of meaningful practice, contextualized well and made meaningful with a wee bit of exaggeration and careful alignment with the learner’s awareness, is the essence of experience.
Yes, there’s a bit more to it than that, but this is the core: focus on do, not dump. And, once you get in the habit, it shouldn’t take longer, it just takes a change in thinking. And even if it does, the dump approach isn’t liable to lead to any meaningful learning, so it’s a waste of time anyway. So, create experiences, not content.
Clark
Like others, I have been seduced by the "what X are you" quizzes on FaceBook. I certainly understand why they’re compelling, but I’ve begun to worry about just why they’re so prevalent. And I’m a wee bit concerned.
People like to know things about themselves. Years ago, when we built an adaptive learning system (it would profile you versus me, and then even if we took the same course we’d be likely to have a different experience), we realized we’d need to profile learners a priori. That is, we’d ask an initial suite of questions, and that’d prime the system. (And we ultimately intended this profiling to be a game, not a set of quiz questions.) Ultimately, that initial model built by the questions would get refined by learner behavior in the system (and we also intended a suite of interventions ‘layered’ on top that would help improve learner characteristics that were malleable).
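As a rough illustration of that prime-then-refine idea (not the actual system; the trait names, values, and update rule below are assumptions for the sketch):

```python
# Seed a learner profile from initial questions, then nudge it with observed behavior.
# Trait names, values, and the update rate are illustrative assumptions.

def seed_profile(questionnaire_answers):
    """Initial profile from a short up-front questionnaire (trait values in [0, 1])."""
    return dict(questionnaire_answers)

def refine_profile(profile, observation, rate=0.1):
    """Move each profiled trait a small step toward what observed behavior suggests."""
    for trait, observed in observation.items():
        current = profile.get(trait, 0.5)
        profile[trait] = current + rate * (observed - current)
    return profile

profile = seed_profile({"prefers_examples_first": 0.8, "persistence": 0.4})
# Observed: the learner kept retrying a difficult practice item.
profile = refine_profile(profile, {"persistence": 1.0})
print(profile)
```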
The underlying mission given us by my CEO was to help learners understand themselves as learners, and use that to their advantage. So, in addition to asking the questions, we’d share with them what we’d learned about them as learners. The notion was what we irreverently termed the ‘Cosmo quiz’, those quizzes that appeared in Cosmopolitan magazine about "how good a Y" you are, where one takes quizzes and then adds up the score.
Fast forward to now, and I began to wonder about these quizzes. They seem cute and harmless, but without seeing all the possible outcomes, it certainly seemed like it might not take that many questions to determine which one you’d qualify as. Yes, in good test design, you ask a question a number of times to disambiguate. But it occurred to me that you could probably use fewer questions (and the outcomes are always written intriguingly, so you don’t necessarily mind which you become), so what are the other questions being used for? The outcomes here don’t really matter!
So, it’d be real easy to insert demographic questions and use that information (presumably en masse) to start profiling markets. If you know other information about these people, you can start aggregating data and mining for information. One question I saw, for instance, asked you to pick which setting you preferred (desert, jungle, mountain, city), etc. Could that help recommend vacations to you?
When I researched these quizzes, rather than finding concerns about the question data, I instead found that much more detailed information about your account was allowed to be passed from Facebook to the quiz host. Which is worse! Even if not, I begin to worry that while they’re fun, what’s the motivation to keep creating new ones? What’s the business relationship? And I think it’s data.
Now, getting better data means you might get more targeted advertising. And that might be preferable to random ads (I’ve seen some pretty fun complaints about "what made them think this was for me"). But I don’t feel like giving them that much insight. So I’m not doing any more of those. I don’t think they really know what animal/movie character/color/fruit/power tool I am. If you want to know, ask me.
Clark
I’ve found myself picking up books about how to change culture, as it seems to be the big barrier to a successful revolution. I’ve finished a quick read of Scaling Up Excellence, am in the midst of Change the Culture, Change the Game, and have Reinventing Organizations and Organize for Complexity (the latter two recommended by my colleague Harold Jarche) on deck. Here are my notes on the first.
Scaling Up Excellence is the work of two Stanford professors who have looked for years at what makes organizations succeed, particularly when they need to grow, or seed a transformation. They’ve had the opportunity to study a wide variety of companies, most as success stories, but they do include some cautionary tales as well. Fortunately, this doesn’t read like an academic book, and while it’s not equipped with formulas, there are overarching principles that have been extracted.
The overarching principle is that scaling is "a ground war, not an air war". What they mean is that you can’t make a high level decision and expect change to happen. It requires hard work in the trenches. Leaders have to go in, figure out what needs to change, and then lead that change. Using a religious metaphor, they distinguish between Buddhist and Catholic approaches, where you’re either wanting everyone to follow the same template, or modify it to their unique situation. Some organizations need to replicate a particular customer experience (think fast food), whereas others will need to be more accommodating to unique situations (think high-end retailers).
There are some principles around scaling, such as getting mental buy-in, helping people see the bigger picture and how the near term necessities are tied into that, and that going slow initially may help things go better. An interesting one, to me, is that accountability is a key factor; you can’t have folks sit on the side lines, and no slackers (let alone those who undermine).
Another suite of principles includes cutting the cognitive load of getting things done the right way, mixing emotional issues with clever approaches, and connecting people. One important element is of course allegiance, where people believe in the organization and it’s clear the organization also believes in the people. No one’s claiming this is easy, but they have lots of examples and guidance.
One really neat idea that I haven’t heard before was the concept of a pre-mortem, that is, imagining a period some time in the future and asking "why did it go right", and also "why did it go wrong". A nice way to distance oneself from the moment and reflect effectively on a proposed plan. If separate groups do this, the inputs can help address potential risks, and emphasize useful actions.
I worry a bit that it’s still ‘old school’ business (more on that after I finish the book I’m currently reading and look to the two ‘new thinking’ books), but they do seem to be pushing the values of doing meaningful work and sharing it. A bit discursive, but overall I thought it insightful.
Clark
I previously wrote about Sutton & Rao’s Scaling up Excellence, and have now finished a quick read of Connors & Smith’s Change the Culture, Change the Game. Both books cover roughly the same area, but in very different ways. Sutton & Rao’s was very descriptive of the changes they observed and the emergent lessons. Connors & Smith, on the other hand, are very prescriptive. Yet both are telling similar stories with considerable overlap.
Let’s be clear, Connors & Smith have a model they want to sell you. You get the model up front, and then implementation tools in the second half. Of course, you aren’t supposed to actually try this without having their help. As long as you’re clear on this aspect of the book, you can take the lessons learned and decide whether you’d apply them yourself or use their support.
They have a relatively clear model, that talks about the results you want, the actions people will have to take to get to the results, the beliefs that are needed to guide those actions, and the experiences that will support those beliefs. They aptly point out that many change initiatives stop at the second step, and don’t get the necessity of the subsequent two steps. It’s a plausible story and model, where the actions, beliefs, and experiences are the elements that create the culture that achieves the results.
Like Kirkpatrick’s levels, the notion is that you start with the results you need, and work backward. Further, everything has to be aligned: you have to determine what actions will achieve the new results, and then what new beliefs can guide those new actions, and ultimately what experiences are needed to foster those new beliefs. You work rigorously to only focus on the ones that will make a difference, recognizing that too much will impact the outcome.
The second half talks about tools to foster these steps. There are management tools, leadership skills, and integration steps. There’s necessary training associated with these, and then coaching (this is the sales bit). It’s very formulaic, and makes it sound like close adherence to these approaches will lead to success. That said, there is a clear recognition that you need to continually check on how it’s going, and be active in making things happen.
And this is where there’s overlap with Sutton & Rao: it’s about ongoing effort, it requires accountability (being willing to take ownership of outcomes), people must be engaged and involved, etc. Both are different approaches to dealing with the same issue: working systematically to make necessary changes in an organization. And in both cases, the arguments are pretty compelling that it takes transparency and commitment by the leadership to walk the talk. It’s up to the executives to choose the needed change, but the empowerment to find ways to make that happen is diffused downward.
Whether you like the more organic approach of Sutton & Rao or the more formulaic model of Connors & Smith, you will find insight into the elements that facilitate change. For me, the synergy was nice to see. Now we’ll see if these are still old-school by comparison to Laloux’s Reinventing Organizations, which has received strong support from some colleagues I have learned to trust.
Clark
I was reflecting on the two books I recently wrote about, Scaling Up and Changing the Game, versus the cultural approach of the Learning Organization I wrote about years ago (and refer to regularly). The thing is that both of the new books are about driving a very specific needed change, whether determined by fiat or based upon something already working well, whereas the earlier work identified general characteristics that make sense. And my thought was: when does each make sense? More importantly, what is the role of Learning & Development (L&D; which really should be P&D or Performance & Development) in each?
If an organization is in need of a shakeup - say a particular unit is underperforming, or a significant shift in the game has been signaled by new competition or a technology/policy/social change - the targeted change makes sense. As I suggested, some of the required elements from the more general approach are implicit or explicit, such as facilitating communication. The role here for L&D, then, is to support the training required for executives leading the shift in terms of communicating and behaving, as well as ongoing coaching. Similarly for the behaviors of employees, and watching for signs of resistance, in general facilitating the shift. However, the locus of responsibility is the executive team in charge of the needed change.
On the other hand, if the organization is being moderately successful, but isn’t optimized in terms of learning, there’s a case for a more general shift. If the culture doesn’t have the elements of a real learning organization - safe to share, valuing diversity, openness to new ideas, time for reflection - then there’s a case to be made for L&D to lead the charge on the change. Let’s be clear, it cannot be done without executive buy-in and leadership, but L&D can be the instigator in this case. L&D here sells the benefits of the change, supports leadership in execution both by training if necessary and coaching, and again coaches the change.
Regardless, L&D should be instigating this change within their own unit. It’s going to lead to a more effective L&D unit, and there’re the benefits of walking the walk as a predecessor to talking the talk.
Ultimately, L&D needs to understand effective culture and the mechanisms to culture change, as well as facilitating social learning, performance consulting, information architecture, resource design, and of course formal learning design. There’re new roles and new skillsets to be mastered on the path to being an effective and strategic contributor to the organization, but the alternative is extinction, eh?
Clark
In the past, it has been the role of L&D to ascertain the resources necessary to support performance in the organization. Finding the information, creating the resources, and making them available has often been a task that either results in training, or complements it. I want to suggest, however, that times have changed and a new strategy may be more effective, at least in many instances.
Creating resources is hard. We’ve seen the need to revisit the principles of learning design because despite the pleas that "we know this stuff already", there are still too many bad elearning courses out there. Similarly with job aids, there are skills involved in doing it right. Assuming those skills is a mistake.
There’s also the fact that creating resources is time-consuming. The time spent doing this may be better spent on other approaches. There are plenty of needs to be addressed without finding more work.
On the flip side, there are now so many resources out there about so many things, that it’s not hard to find an answer. Finding good answers, of course, is certainly more problematic than just finding an answer, but there are likely answers out there.
The implication here is to start curating resources, not creating them. They might come internally, from the employees, or from external sources, but regardless of provenance, if it’s out there, it saves your resources for other endeavors.
The new mantra is Personal Knowledge Mastery, and while that’s for the individual, there’s a role for L&D here too: practicing ‘representative knowledge mastery’, as well as fostering PKM for the workforce. You should be monitoring feeds relevant to your role and those you’re responsible for facilitating. You need to practice it to be able to preach it, and you should be preaching it.
The point is to not be recreating resources that can be found, conserving your energy for those things that are business critical. One organization has suggested that they only create resources for internal culture; everything else is curated. Certainly only proprietary material should be the focus of creation.
So, curate over create. Create when you have to, but only then. Finding good answers is more efficient than generating them.
Clark
Too often, Learning & Development (L&D) is looking to provide all the answers. They work to get the information from SMEs, and create courses around it. They may also create performance support resources as well. And yet there are principled and pragmatic reasons why this doesn’t make sense. Here’s what I’m thinking.
On principle, the people working closest to the task are likely to be the most knowledgeable about it. The traditional role of information from the SME has been to support producing quality outputs, but increasingly there are tools that let the users create their own resources easily. The answer can come in the moment from people connected by networks, without having to go through an explicit process. And, as things are becoming more ambiguous and unique, this makes accuracy for the specific context more likely, as workers share their contexts and get targeted responses.
This doesn’t happen without facilitation. It takes a culture where sharing is valued, where people are connected, and have the skills to work well together. Those are roles L&D can, and should, play. Don’t assume that the network will be viable to begin with, or that people know how to work and play well together. Also don’t assume that they know how to find information on their own. The evidence is that these are skills that need to be developed.
The pragmatic reasons are those about how L&D has to meet more needs without resources. If people can self-help, L&D can invest resources elsewhere. I suggest that curation trumps creation, in that finding the answer is better than creating it, if possible.
When I talk about these possibilities, one of the reliable responses is "but what if they say the wrong thing?" And my response is that the network becomes self-correcting. Sure, networks require nurturing until they reach that stage, but again it’s a role for L&D. Initially, someone may need to scrutinize what comes through, and exhort experts to keep it correct, but eventually the network, with the right culture, support, and infrastructure, becomes a self-correcting and sustaining resource.
Work so that performers get their answers from the network, not from your work. When possible, of course.
Clark
I’ve been a fan of Jane Hart since I met her through Jay Cross and we joined together in the ITA (along with colleagues Harold Jarche and Charles Jennings). And I’d looked at the previous edition of her Social Learning Handbook, so it was on faith that I endorsed the new edition. So I took a deeper look recently, and my faith is justified; this is a great resource!
Jane has an admirable ability to cut through complex concepts and make them clear. She cites the best work out there when it is available, and comes with her own characterizations when necessary. The concepts are clear, illustrated, and comprehensible.
This isn’t a theoretical treatment, however. Jane has pragmatic checklists littered throughout as well as great suggestions. Jane is focused on having you succeed. Practical guidance underpins all the frameworks.
I’m all the more glad I recommended this valuable compendium. If you want to tap into the power of social learning, there is no better guide.
Clark
Larry Irving kicked off mLearnCon with an inspiring talk about the ways in which technology can disrupt education. His ideas about VOOCs and nanodegrees were intriguing, and I wish he’d talked more about adaptive learning. A great kickoff to the event.
Clark
Karen McGrane evangelized good content architecture (a topic near to my heart), in a witty and clear keynote. With amusing examples and quotes, she brought out just how key it is to move beyond hard wired, designed content and start working on rule-driven combinations from structured chunks. Great stuff!
Clark