I tout the value of learning science and good design. And yet, I also recognize that to do it to the full extent is beyond most people’s abilities. In my own work, I’m not resourced to do it the way I would and should do it. So how can we strike a balance? I believe that we need to use smart heuristics instead of the full process.
I have been talking recently with a few different people who are, basically, resourced to do it the right way. They talk about getting the right SMEs (e.g. with sufficient depth to develop models), using a cognitive task analysis process to get the objectives, aligning the processing activities to the type of learning objective, developing appropriate materials and rich simulations, and testing the learning and using feedback to refine the product, all before final release. That’s great, and I laud them. Unfortunately, the cost of a team capable of doing this, and the time schedule to do it right, doesn’t fit the situation I’m usually in (nor the one most of you are in). To be fair, if it really matters (e.g. lives depend on it, or you’re going to sell it), you really do need to do this (as medical, aviation, and military training usually do).
But what if your team isn’t composed of PhDs in the learning sciences, your development resources are tied to the usual tools, your budgets are far more stringent, and your schedules are likewise constrained? Do you have to abandon hope? My claim is no.
I believe that a smart, heuristic approach is plausible. Using the typical ‘law of diminishing returns’ curve (and the shape of this curve is open to debate), I suggest it’s plausible that there is a sweet spot of design processes that gives you a high amount of value for a pragmatic investment of time and resources. Conceptually, I believe you can get good outcomes with some steps that tap into the core of learning science without following it to the letter. Learning is a probabilistic game, overall, so we’re taking a small tradeoff in probability to meet real-world constraints.
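To make the sweet-spot intuition concrete, here is a minimal toy sketch (my own illustrative curve and numbers, not anything from the research): assume value rises with diminishing returns on design effort, and call the sweet spot the point where another unit of effort stops paying for itself.

```python
import math

def value(effort, k=0.5):
    """Toy diminishing-returns curve: value approaches 1.0 asymptotically."""
    return 1 - math.exp(-k * effort)

# The 'sweet spot': keep investing while the marginal value of one more
# unit of effort exceeds its cost (0.02 here is an arbitrary toy cost).
COST_PER_UNIT = 0.02
effort = 0
while value(effort + 1) - value(effort) > COST_PER_UNIT:
    effort += 1

print(f"Sweet spot at ~{effort} units of effort, "
      f"capturing {value(effort):.0%} of the achievable value")
```

With these toy numbers, roughly 95% of the achievable value arrives by the sixth unit of effort; the real shape of the curve is, as noted, open to debate.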
What are these steps? Instead of doing a full cognitive task analysis, we’ll make our best guess at meaningful activities before getting feedback from the SME. We’ll switch the emphasis from knowledge tests to mini- and branching scenarios for practice tasks, or we’ll have learners take information resources and use them to generate work products (charts, tables, analyses) as processing. We’ll try to anticipate the models, and ask for misconceptions & stories to build in. And we’ll align pre-, in-, and post-class activities in a pragmatic way. Finally, we’ll do a learning equivalent of heuristic evaluation rather than a full scientifically valid test: we’ll run it by the SMEs and fix their (legitimate) complaints, then run it with some students and fix the observed flaws.
In short, what we’re doing here is approximating the full process, with some smart guesses instead of full validation. There’s no expectation that the outcome will be as good as we’d like, but it’s going to be a lot better than throwing quizzes on content. And we can do it with a smart team that isn’t composed of learning scientists but is informed, on a longer but still reasonable schedule.
I believe we can create transformative learning under real world constraints. At least, I’ll claim this approach is far more justifiable than the too oft-seen approach of info dump and knowledge test. What say you?
Clark
I’ve been on quite the roll of late, calling out some bad practices and calling for learning science. And it occurs to me that there could be some pushback. So let me be clear, I strongly suggest that the types of learning that are needed are not info dump and knowledge test, by and large. What does that mean? Let’s break it down.
First, let me suggest that what’s going to make a difference to organizations is not better fact-remembering. There are times when fact-remembering is needed, such as medical vocabulary (my go-to example). When that needs to happen, tarted-up drill-and-kill (e.g. quiz show templates, etc.) is the way to do it. Getting people to remember rote facts or arbitrary things (like part names) is very difficult. And it’s largely unnecessary if people can look it up, i.e. the information is in the world (or can be). There are some things that need to be known cold, e.g. emergency procedures, hence the tremendous emphasis on drills in aviation and the military. Other than that, put it in the world, not the head. Lookup tables, info sheets, etc. are the solution. And I’ll argue that the need for fact-remembering arises less than 5-10% of the time.
So what is useful? I’ll argue that what is useful is making better decisions. That is, the ability to explain what’s happened and react, or predict what will happen and make the right choice as a consequence. This comes from model-based reasoning. What sort of learning helps model-based reasoning? Two types, in a simple framework. Learners need to process the models to help comprehend them, and use them in context to make decisions, with the consequences providing feedback. Yes, there likely will be some content presentation, but it’s not everything; instead it’s the core model with examples of how it plays out in context. That is, annotated diagrams or narrated animations for the models; comic books, cartoons, or videos for the examples. Media, not bullet points.
The processing that helps make models stick includes having learners generate products: giving them data or outcomes and having them develop explanatory models. They can produce summary charts and tables that serve as decision aids. They can create syntheses and recommendations. This really leads to internalization and ownership, though it may be more time-consuming than is worthwhile. The other approach is to have learners make predictions using the models, explaining things. Worst case, they can answer questions about what a model implies in particular contexts. That’s still a knowledge question, but not an "is this an X or a Y?"; rather, "you have to achieve Z, would you use approach X or approach Y?"
Most importantly, you need people to use the models to make decisions like they’ll be making in the workplace. That means scenarios and simulations. Yes, a mini-scenario of one question is essentially a multiple choice (though better written with a context and a decision), but really things tend to be bundled up, and you at least need branching scenarios. A series of these might be enough if the task isn’t too complex, but if it’s somewhat complex, it might be worth creating a model-based simulation and giving the learners lots of goals with it (read: serious game).
And, don’t forget: if it matters (and why are you bothering if it doesn’t?), you need them to practice until they can’t get it wrong. And you need to be facilitating reflection. The alternatives to the right answer should reflect ways learners often go wrong, and address them individually. "No, that’s not correct, try again" is a really rude way to respond to learner actions. Connect their actions to the model!
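As an illustration of what that looks like structurally, here’s a minimal sketch (my own hypothetical structure and scenario content, not any particular authoring tool) of a mini-scenario whose wrong options each map to a common misconception, with feedback that connects the choice back to the model, and with choices that branch to different next scenes:

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    text: str          # the decision the learner can take
    feedback: str      # feedback connecting the action to the model
    next_scene: str    # the scene this choice branches to
    correct: bool = False

@dataclass
class Scene:
    context: str       # the story setting in which the decision is made
    options: list = field(default_factory=list)

# Hypothetical content, purely for illustration.
scenes = {
    "intro": Scene(
        context="A customer reports intermittent outages right after a deploy...",
        options=[
            Option("Roll back immediately",
                   "Rolling back hides the evidence; the model says isolate "
                   "what changed before acting.",
                   next_scene="outage_recurs"),
            Option("Review the deploy's change log first",
                   "Right: the model starts by isolating the change.",
                   next_scene="log_review", correct=True),
        ],
    ),
    # ...further scenes hang off each choice, so mistakes play out...
}
```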
What this also implies is that learning is much more practice than content presentation. Presenting content and drilling knowledge (particularly in about an 80/20 ratio) is essentially a waste of time. Meaningful practice should be more than half the time. And you should consider putting the practice up front and driving learners to the content, as opposed to presenting the content first. Make the task make the content meaningful.
Yes, I’m making these numbers up, but they’re a framework for thinking. You should be having lots of meaningful practice. There’s essentially no role for bullet points or prose with simplistic quizzes, very little role for tarted-up quizzes, and lots of role for media on the content side and branching scenarios and model-driven interactions on the interaction side. This is pretty much an inverse of the tools and outputs I see. Hence my continuing campaign for better learning. Make sense?
Clark
A commenter on last week’s post asked an implicit question that caused me to think. The issue was whether the solutions I was proposing have the learners be self-directed, or whether it was ‘push’ learning. And I reckon there’s a bit of both, but I’m fighting for more of a constructivist approach than the instructivist model.
I’ve argued in the past for more active learning, and I think the argument for pure instructivism sets up a straw man (Feuerzeig argued for guided discovery back in ’85!). Obviously, I think that pure exploration is doomed to failure, as we know that learners can stay in one small corner of a search space without support (hence the coaching in Quest). However, a completely guided experience doesn’t ‘stick’ as well, either.
Another factor is our target learners. In my experience, more constructivist approaches can be disturbing to learners who have had more instructivist approaches. And the learners we are dealing with haven’t been that successful in school, and typically need a lot of scaffolding.
Yet our goals are fairly pragmatic overall (and in general we should be looking for ways to be pragmatic in more of our learning). We’re focused on meaningful skills, so we should leverage this.
In this case, I’m moving the design to more and more "here’s a goal, here’re some resources" type of approach where the goal is to generate a work-related integration (requiring relevant cognitive processing). Even if it’s conceptual material, I want learners to be doing this, and of course the main focus is on real contextualized practice.
I’m pushing a very activity-based pedagogy (and curriculum). Yes, the tasks are designed, but learners are expected to take some responsibility for processing the information to produce outputs. The longer-term goal is to increase the challenge and variety as we go through the curriculum, developing learners’ ability to learn to learn, and their ability to adapt as well. Make sense?
Clark
A colleague I greatly respect, who has a track record of high impact in important positions, has been a proponent of service science. And I confess that it hadn’t really penetrated. Yet last week I heard about it in a way that resonated much more strongly and got me thinking, so let me share where it’s leading me, and see what you say.
While doing a summer internship at NASA as a grad student, I heard about an exciting concept called interface ‘explorability‘. When I brought it back to the lab, it didn’t really resonate with my advisor. Then, some time later (a year or two), he was discussing a concept and I mentioned that it sounded a lot like that ‘explorability’, and he suddenly wanted to know more. The point being that there is a time when you’re ready to hear a message. And that’s me with service science.
The concept is considering a mutual value generation process between provider and customer, and engineering it across the necessary system components and modular integrations to yield a successful solution. As organizations need to be more customer-centric, this perspective yields processes to do that in a very manageable, measurable way. And that’s the perspective I’d been missing when I’d previously heard about it, but Hastings & Saperstein presented it last week at the Future of Talent event in the form of Service Thinking, which brought the concept home.
I wondered how it compared to Design Thinking, another concept sweeping instructional design and related fields, and it appears to be synergistic but perhaps a superset. While nothing precludes Design Thinking from producing the type of outcome Service Thinking is advocating, I’m inferring that Service Thinking is a bit more systematic and higher level.
The interesting idea for me was to think of bringing Service Thinking to the role of L&D in the organization. If we’re looking systematically at how we can bring value to the customer, in this case the organization, we have a chance to look at the bigger picture: the Performance & Development view instead of the training view. If we take the perspective of an integrated approach to meeting organizational execution and innovation needs, we may naturally develop the performance ecosystem.
We need to take a more comprehensive approach, weaving technology capabilities, resources, and people into an integrated whole. I’m looking at service thinking, as perhaps an integration of the rigor of systems thinking with the creative customer focus of design thinking, as at least another way to get us there. Thoughts?
Clark
I talked yesterday about how some concepts may not resonate immediately, and need to continue to be raised until the context is right. There I was talking about explorability and my own experience with service science, but it occurred to me that the same may be true of games.
Now, I’ve been pushing games as a vehicle for learning for a long time, well before my book came out on the topic. I strongly believe that next to mentored live practice (which doesn’t scale well), (serious) games are the next best learning opportunity. The reasons are strong:
safe practice: learners can make mistakes without real consequences (tho’ world-based ones can play out)
contextualized practice (and feedback): learning works better in context rather than on abstract problems
sufficient practice: a game engine can give essentially infinite replay
adaptive practice: the game can get more difficult to develop the learner to the necessary level (see the sketch after this list)
meaningful practice: we can choose the world and story to be relevant and interesting to learners
the list goes on. Pretty much all the principles of the Serious eLearning Manifesto are addressed in games.
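On the adaptive point in particular, the core mechanic is simple. Here’s a minimal sketch (my own illustration, not any particular game engine) of a practice loop that raises difficulty after success and lowers it after failure, keeping the learner at the edge of their ability:

```python
import random

def run_adaptive_practice(attempt, start_level=1, max_level=10, rounds=20):
    """attempt(level) -> bool: did the learner succeed at this difficulty?"""
    level = start_level
    for _ in range(rounds):
        if attempt(level):
            level = min(level + 1, max_level)  # stretch a bit further
        else:
            level = max(level - 1, 1)          # back off to rebuild success
    return level

# Simulated learner whose odds of success fall as difficulty rises.
final = run_adaptive_practice(lambda lvl: random.random() < 6 / (lvl + 5))
print(f"Practice settled around difficulty {final}")
```

A real engine layers goals, story, and feedback on top, but the difficulty adjustment itself can be this small.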
Now, I and others (Gee, Aldrich, Shaffer, again the list goes on) have touted this for years. Yet we haven’t seen as much progress as we could and should. It seemed like there was a resurgence around 2009-2010, but then it seemed to go quiet again. And now, with Karl Kapp’s Gamification book and the rise of interest in gamification, we have yet another wave of interest.
Now, I’m not a fan of the extrinsic gamification, but it appears there’s a growing awareness of the difference between extrinsic and intrinsic. And I’m seeing more use of games to develop understanding in at least K12 circles. Hopefully, the awareness will arise in higher ed and corp too.
Some fear it’s too costly, but my response is twofold:
games aren’t as expensive as you fear; there are lots of opportunities for games in lower price ranges (e.g. $100K), don’t buy into the $1M and up mentality
they’re actually likely to be effective (as part of a complete learning experience), compared to many if not most of the things being done in learning
So I hope we might finally go beyond Clicky Clicky Bling Bling (tarted-up quiz shows, cheesy videos, and more) and get to interaction that actually leads to change. Here’s hoping!
Clark
In a previous post, I argued for different types and ratios of worthwhile learning activities. I’ve been thinking about this (and working on it) quite a bit lately. I know there are other resources that I should know about (pointers welcome), but I’m currently wrestling with several types of situations and wanted to share my thinking. This is aside from scenarios/simulations (e.g. games), which are the first, best learning practice you can engage in, of course. What I’m looking for is ways to get learners to do processing in ways that will assist their ability to do. This isn’t recitation, but application.
So one situation is where the learner has to execute the right procedure. This seems easy: they’re liable to get it right in practice. The problem is that they can still get it wrong in real situations. An idea I had heard of before, but which was reiterated through Socratic Arts (Roger Schank & cohorts), is to have learners observe someone performing the procedure (e.g. on video) and identify whether it was done right or not. This is a more challenging task than just doing it right for many routine but important tasks (e.g. sanitation). It has learners monitor the process, and then they can turn that on themselves to become self-monitoring. If the selection of mistakes is broad enough, they’ll have experience that will transfer to their whole performance.
Another task that I faced earlier was the situation where people had to interpret guidelines to make a decision. Typically, the extreme cases are obvious, and instructors argue that they all are, but in reality there are many ambiguous situations. Here, as I’ve argued before, the thing to do is have folks work in groups and be presented with increasingly ambiguous situations. What emerges from the discussion is usually a rich unpacking of the elements. This processing of the rules in context exposes the underlying issues in important ways.
Another type of task is helping people understand how to apply models to make decisions. Rather than present them with the models, I’m again looking for more meaningful processing. Eventually I’ll expect learners to make decisions with them, but as a scaffolding step, I’m asking them to interpret the models in terms of recommendations for use. So before I have them engage in scenarios, I’ll ask them to use the models to create, say, a guide to how to use that information: to diagnose, to remedy, to put in place initial protections. At other times, I’ll have them derive subsequent processes from the theoretical model.
One other example I recall came from a paper that Tom Reeves wrote (and I can’t find) where he had learners pick from a number of options that indicated problems or actions to take. The interesting difference was a follow-up question about why. Every choice had two stages: decision, then rationale. This is a very clever way to see whether learners are not just getting the right answer but understand why it’s right. I wonder if any of the authoring tools on the market right now include such a template!
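Since I can’t point to a tool that ships it, here’s a hypothetical sketch of what such a two-stage template might look like, with the decision and the rationale scored separately (all the content is my own invention):

```python
# Hypothetical two-stage question: decision first, then rationale.
question = {
    "context": "Production response times have doubled since the release.",
    "decision": {
        "prompt": "What do you do first?",
        "options": ["Restart the servers", "Profile the new code path"],
        "correct": 1,
    },
    "rationale": {
        "prompt": "Why?",
        "options": [
            "Restarting always clears slowdowns",
            "The change is the most likely cause, so isolate it first",
        ],
        "correct": 1,
    },
}

def score(decision_choice, rationale_choice, q=question):
    """Full credit requires the right answer *and* the right reason."""
    right_answer = decision_choice == q["decision"]["correct"]
    right_reason = rationale_choice == q["rationale"]["correct"]
    return right_answer, right_reason

print(score(1, 0))  # right decision, wrong rationale -> (True, False)
```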
I know there are more categories of learning and associated tasks that require useful processing (towards do, not know, mind you ;), but here are a couple that are ‘top of mind’ right now. Thoughts?
Clark
In a (rare) fit of tidying, I was moving from one note-taking app to another, and found a diagram I’d jotted, and it rekindled my thinking. The point was characterizing social media in terms of their particular mechanisms of distribution. I can’t fully recall what prompted the attempt at characterization, but one result of revisiting was thinking about the media in terms of whether they’re part of a natural mechanism of ‘show your work’ (ala Bozarth)/’work out loud’ (ala Jarche).
The question revolves around whether the media are point or broadcast, that is whether you specify particular recipients (even in a mailing or group list), or whether it’s ‘out there’ for anyone to access. Now, there are distinctions, so you can have restricted access on the ‘broadcast’ mode, but in principle there’re two different mechanisms at work.
It should be noted that in the ‘broadcast’ model, not everyone may be aware that there’s a new message, if they’re not ‘following’ the poster of the message, but it should be findable by search if not directly. Also, the broadcast may only be an organizational network, or it can be the entire internet. Regardless, there are differences between the two mechanisms.
So, for example, a chat tool typically lets you ping a particular person, or a set list. On the other hand, a microblog lets anyone decide to ‘follow’ your quick posts. Not everyone will necessarily be paying attention to the ‘broadcast’, but they could. Typically, microblogs (and chat) are for short messages, such as requests for help or pointers to something interesting. The limitations mean that more lengthy discussions typically are conveyed via…
Formats supporting unlimited text, including thoughtful reflections, updates on thinking, and more, tend to be conveyed via email or blog posts. Again, email is addressed to a specific list of people, directly or via a mail list, openly or perhaps with some folks receiving copies ‘blind’ (that is, not everyone knows who is receiving the message). A blog post (like this), on the other hand, is open to anyone on the ‘system’.
The same holds true for other media files besides text. Video and audio can be hidden in a particular place (e.g. a course) or sent directly to one person. On the other hand, such a message can be hosted on a portal (YouTube, iTunes) where anyone can see it. The dialog around a file provides a rich augmentation, just as it can on a blog, or via edited RTs of a microblog comment.
Finally, a slightly different twist is shown with documents. Edited documents (e.g. papers, presentations, spreadsheets) can be created and sent, but there’s little opportunity for cooperative development. Creating these in a richer way that allows for others to contribute requires a collaborative document (once known as a wiki). One of my dreams is that we may have collaboratively developed interactives as well, though that still seems some way off.
The point for showing your work out loud is that point-to-point is only a way to get specific feedback, whereas a broadcast mechanism is really about the opportunity to get broader awareness and, potentially, feedback. This leads to a broader shared understanding and continual improvement, two goals critical to organizational improvement.
Let me be the first to say that this isn’t necessarily an important, or even new, distinction; it’s just me practicing what I preach. Also, I recognize that collaborative documents are fundamentally different, and I need a more differentiated way to look at these (pointers or ideas, anyone?), but here’s my interim thinking. What say you?
#itashare
Clark
In preparation for a presentation, I was reviewing my mobile models. You may recall I started with my 4C‘s model (Content, Compute, Communicate, & Capture), and have mapped that further onto Augmenting Formal, Performance Support, Social, & Contextual. I’ve refined it as well, separating out contextual and social as different ways of looking at formal and performance support. And, of course, I’ve elaborated it again, and wonder whether you think this more detailed conceptualization makes sense.
So, my starting point was realizing that it wasn’t just content. That is, there’s a difference between compute and content: the interactivity was an important part of the 4C’s, so the characteristics in the content box weren’t discriminated enough. So the two new initial sections are mlearning content and mlearning compute, each by self or social. That is, we can be getting things as an individual, or it can be something that’s socially generated or socially enabled.
The point is that content is prepared media, whether text, audio, or video. It can be delivered or accessed as needed. Compute, interactive capability, is harder, but potentially more valuable. Here, an individual might actively practice, have mixed initiative dialogs, or even work with others or tools to develop an outcome or update some existing shared resources.
Things get more complex when we go beyond these elements. I had capture as one thing, and I’m beginning to think it’s two: one is capturing the current context and sharing it for various purposes, and the other is the system using that context to do something unique.
To be clear here, capture is where you use the text insertion, microphone, or camera to catch unique contextual data (or user input). It could also be other such data, such as a location, time, barometric pressure, temperature, or more. This data, then, is available to review, reflect on, or more. It can be combinations, of course, e.g. a picture at this time and this location.
Now, if the system uses this information to do something different than under other circumstances, we’re contextualizing what we do. Whether it’s because of when you are, providing specific information, or where you are, using location characteristics, this is likely to be the most valuable opportunity. Here I’m thinking alternate reality games or augmented reality (whether it’s voiceover, visual overlays, what have you).
And I think this is device independent, e.g. it could apply to watches or glasses or… as well as phones and tablets. It means my 4 C’s become: content, compute, capture, and contextualize. To ponder.
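As a sketch of the contextualize leg (hypothetical locations and rules, nothing from a real system), the idea is simply that the same request yields different support depending on the captured context:

```python
from datetime import datetime

# Hypothetical context-sensitive delivery: same request, different
# response depending on where and when you are.
def contextualize(location: str, now: datetime) -> str:
    if location == "warehouse" and now.hour < 9:
        return "Checklist: morning forklift safety inspection"
    if location == "warehouse":
        return "Job aid: pick-and-pack procedure for this aisle"
    if location == "client_site":
        return "Briefing: this client's open issues and history"
    return "General resources: search, community, courses"

print(contextualize("warehouse", datetime(2015, 11, 22, 8, 30)))
```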
So, this is a more nuanced look at the mobile opportunities, and certainly more complex as well. Does the greater detail provide greater benefit?
Clark
As usual, I will be at DevLearn (in Las Vegas) this next week, and welcome meeting up with you there. There is a lot going on. Here’re the things I’m involved in:
On Tuesday, I’m running an all day workshop on eLearning Strategy. (Hint: it’s really a Revolutionize L&D workshop ;). I’m pleasantly surprised at how many folks will be there!
On Wednesday at 1:15 (right after lunch), I’ll be speaking on the design approach I’m leading at the Wadhwani Foundation, where we’re trying to integrate learning science with pragmatic execution. It’s at least partly a Serious eLearning Manifesto session.
On Wednesday at 2:45, I’ll be part of a panel on mlearning with my fellow mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell, chaired by conference program director David Kelly.
Of course, there’s much more. A few things I’m looking forward to:
The keynotes:
Neil deGrasse Tyson, a fave for his witty support of science
Beau Lotto talking about perception
Belinda Parmar talking about women in tech (a burning issue right now)
DemoFest, all the great examples people are bringing
and, of course, the networking opportunities
DevLearn is probably my favorite conference of the year: learning focused, technologically advanced, well organized, and with the right people. If you can’t make it this year, you might want to put it on your calendar for another!
Clark
While our cognitive architecture has incredible capabilities (how else could we come up with advances such as Mystery Science Theater 3000?), it also has limitations. The same adaptive capabilities that let us cope with information overload in both familiar and new ways also lead to some systematic flaws. And it led me to think about the ways in which we support these limitations, as they have implications for designing solutions for our organizations.
The first limit is at the sensory level. Our mind actually processes pretty much all the visual and auditory sensory data that arrives, but it disappears pretty quickly (within milliseconds) except for what we attend to. Basically, your brain fills in the rest (which leaves open the opportunity to make mistakes). What do we do? We’ve created tools that allow us to capture things accurately: cameras and audio recorders. These allow us to capture the context exactly, not as our memory reconstructs it.
A second limitation is our ‘working’ memory. We can’t hold too much in mind at one time. We ‘chunk’ information together as we learn it, and can then hold more total information at one time. Also, the format of working memory is largely ‘verbal’. Consequently, tools like diagrams, outlines, or mindmaps add structure to our knowledge and support our ability to work on it.
Another limitation of our working memory is that it doesn’t support complex calculations with many intermediate steps. Consequently we need ways to deal with this. External representations (as above), such as recording intermediate steps, work, but we can also build tools that offload that process, such as calculators. Wizards, or interactive dialog tools, are another form of calculator.
Processing information in short-term memory can lead to it being retained in long-term memory. Here the storage is almost unlimited in time and scope, but it is hard to get things in there, and they aren’t remembered exactly, but instead by meaning. Consequently, models are a better learning strategy than rote learning. And external resources, like the ability to look up or search for information, are far better than trying to get it in the head.
Similarly, external support for when we do have to do things by rote is a good idea. So, support for process is useful and the reason why checklists have been a ubiquitous and useful way to get more accurate execution.
In execution, we have a few flaws too. We’re heavily biased to solve new problems in the ways we’ve solved previous problems (even if that’s not the best approach). We’re also likely to use tools in familiar ways and miss new ways to use tools to solve problems. There are ways to prompt lateral thinking at appropriate times, and we can both make access to such support available, and even trigger it when we have contextual clues.
We’re also biased to prematurely converge on an answer (intuition) rather than seek to challenge our findings. Access to data and support for capturing and invoking alternative ways of thinking are more likely to prevent such mistakes.
Overall, our use of more formal logical thinking fatigues quickly. Scaffolding help like the above decreases the likelihood of a mistake and increases the likelihood of an optimal outcome.
When you look at performance gaps, you should look to such approaches first, and look to putting information in the head last. This more closely aligns our support efforts with how our brains really think, work, and learn. This isn’t a complete list, I’m sure, but it’s a useful beginning.
Clark
Neil deGrasse Tyson opened this year’s DevLearn conference. A clear crowd favorite, folks lined up to get in (despite the huge room). In an engaging, funny, and poignant talk, he made a great case for science and learning.
Clark
Beau Lotto gave a very interesting keynote that built from perceptual phenomena to a lovely message on learning.
Clark
Belinda Parmar addressed the critical question of women in tech in a poignant way, pointing out that the small stuff is important: language, imagery, context. She concluded with small actions including new job description language and better female involvement in product development.
Clark
This past week I was at the always great DevLearn conference, the biggest and arguably best yet. There were some hiccups in my attendance, as several blocks of time were taken up with various commitments both work and personal, so for instance I didn’t really get a chance to peruse the expo at all. Yet I attended keynotes and sessions, as well as presenting, and hobnobbed with folks both familiar and new.
The keynotes were arguably even better than before, and a high bar had already been set.
Neil deGrasse Tyson was eloquent and passionate about the need for science and the lack of match between school and life. I had a quibble about his statement that doing math teaches problem-solving, as it takes the right type of problems (and Common Core is a step in the right direction) and it takes explicit scaffolding. Still, his message was powerful and well-communicated. He also made an unexpected connection between Women’s Liberation and the decline of school quality that I hadn’t considered.
Beau Lotto also spoke, linking how our past experience alters our perception to necessary changes in learning. While I was familiar with the beginning point of perception (a fundamental part of cognitive science, my doctoral field), he took it in very interesting and useful direction in an engaging and inspiring way. His take-home message: teach not how to see but how to look, was succinct and apt.
Finally, Belinda Parmar took on the challenge of women in technology, and documented how small changes can make a big difference. Given the madness of #gamergate, the discussion was a useful reminder of inequity in many fields and for many. She left lots of time to have a meaningful discussion about the issues, a nice touch.
Owing to the commitments, both personal and speaking, I didn’t get to see many sessions. I had the usual mix of good ones and a not-so-good one (though I admit my bar is kind of high). I like that the Guild balances known speakers and topics with taking some chances on both. I also note that most of the known speakers are folks I respect who continue to think ahead and bring new perspectives, even if in a track representing their work. As a consequence, the overall quality is always very high.
And the associated events continue to improve. The DemoFest was almost too big this year: so many examples that it’s hard to even start looking at them, as you want to be fair and see them all, but it’s just too monumental. Of course, the Guild had a guide that grouped them, so you could drill down into the ones you wanted to see. The expo reception was a success as well, and the various snack breaks suited the opportunity to mingle. I kept missing the ice cream, but perhaps that’s for the best.
I was pleased to have the biggest turnout yet for a workshop, and take the interest in elearning strategy as an indicator that the revolution is taking hold. The attendees were faced with the breadth of things to consider across advanced ID, performance support, eCommunity, backend integration, decoupled delivery, and then were led through the process of identifying elements and steps in the strategy. The informal feedback was that, while daunted by the scope, they were excited by the potential and recognizing the need to begin. The fact that the Guild is holding the Learning Ecosystem conference and their release of a new and quite good white paper by Marc Rosenberg and Steve Foreman are further evidence that awareness is growing. Marc and Steve carve up the world a little differently than I do, but we say similar things about what’s important.
I am also pleased that Mobile interest continues to grow, as evidenced by the large audience at our mobile panel, where I was joined by other mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell. They provide nicely differing viewpoints, with Sarah representing the irreverent designer, Robert the pragmatic systems perspective, and Chad the advanced technology view, to complement my more conceptual approach. We largely agree, but represent different ways of communicating and thinking about the topic. (Sarah and I will be joined by Nick Floro for ATD’s mLearnNow event in New Orleans next week).
I also talked about trying to change the pedagogy of elearning at the Wadhwani Foundation: the approach we’re taking and the challenges we face. The goal I’m involved in is job skilling, and consequently there’s a real need and a real opportunity. What I’m fighting for is meaningful practice as a way to achieve real outcomes. We have some positive steps and some missteps, but I think we have the chance to have a real impact. It’s a work in progress, and fingers crossed.
So what did I learn? The good news is that the audience is getting smarter, wanting more depth in their approaches and breadth in what they address. The bad news appears to be that the view of ‘information dump & knowledge test = learning’ is still all too prevalent. We’re making progress, but too slowly (ok, so perhaps patience isn’t my strong suit ;). If you haven’t, please do check out the Serious eLearning Manifesto to get some guidance about what I’m talking about (with my colleagues Michael Allen, Julie Dirksen, and Will Thalheimer). And now there’s an app for that!
If you want to get your mind around the forefront of learning technology, at least in the organizational space, DevLearn is the place to be.
Clark
A colleague pointed me to this article that posited the benefits of digital note-taking. While I agree, I want to take it further. There are some non-obvious factors in note-taking.
As the article points out, there are numerous benefits possible by taking notes digitally. They can be saved and reviewed, have text and/or sketches and/or images (even video too), be shared, revised, elaborated with audio both to add to notes and to read back the prose, and more. Auto-correct is also valuable. And I absolutely believe all this is valuable. But there’s more.
One thing the article touched on is the value of structure. Whether in outlines, where indents capture relationships, or similarly in networks, capturing that structure means valuable processing by the note-taker. Interestingly, graphical frameworks can support cycles or cross-references in the structure better than outlines can (I was once called out on the claim that there was no additional value to mindmaps over outlines, and this is one area where they are superior).
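In data-structure terms, the difference is that an outline is a tree, where every node hangs under exactly one parent, while a mindmap is a general graph that can also hold cross-links and cycles. A small illustrative sketch (my own example content):

```python
# An outline is a tree: each topic has exactly one parent.
outline = {
    "Learning design": ["Objectives", "Practice", "Feedback"],
    "Practice": ["Scenarios"],
}

# A mindmap is a graph: the same nodes, plus cross-links (and even
# cycles) that a strict outline cannot express.
mindmap_edges = [
    ("Learning design", "Objectives"),
    ("Learning design", "Practice"),
    ("Learning design", "Feedback"),
    ("Practice", "Scenarios"),
    ("Feedback", "Objectives"),  # cross-reference: fine in a graph,
    ("Scenarios", "Feedback"),   # impossible in a pure outline
]
```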
However, as the article noted, research has shown that taking verbatim notes doesn’t help. You have to actively reprocess the information, extracting structure through outlines or networks, and paraphrasing what you hear instead of parroting it. This is the real value of note taking. You need to be actively engaged.
Note-taking also helps keep that engagement. The mindmaps that I frequently post started as a way for me to listen better. My brain can be somewhat lateral (an understatement; a benefit for Quinnovating, but a problem for listening to presentations), and if someone says something interesting, by the time I’ve explored the thought and returned, I’ve lost the plot. Mindmapping was a way to occupy enough extra cognitive overhead to keep my mind from sparking off. It just so happens that when I posted one, it drew significant interest (read: hits), and so I’ve continued it for me, the audience, and the events.
Interestingly, the benefit of the note taking can persist even if the notes aren’t reviewed; the act of note-taking with the extra processing in paraphrasing is valuable in itself. I once asked an audience how many took notes, and many hands went up. I then asked how many read the notes afterwards, and the result was significantly less. Yet that’s not a bad thing!
So, take notes that reprocess the information presented. Then, review them if useful. But give yourself the benefit of the processing, if nothing else.
Clark
While I loved his presentation, his advocacy for science, and his style, I had a problem with one thing Neil deGrasse Tyson said during his talk. Now, he’s working on getting deeper into learning, but this wasn’t off the cuff, this was his presentation (and he says he doesn’t say things publicly until he’s ready). So while it may be that he skipped the details, I can’t. (He’s an astrophysicist, I’m the cognitive engineer ;)
His statement, as I recall and mapped, said that math wires brains to solve problems. And yes, with two caveats. There’s an old canard that they used to teach Latin because it taught you how to think, and it actually didn’t work that way. The ability to learn Latin taught you Latin, but not how to think or learn, unless something else happened. Having Latin isn’t a bad thing, but it’s not obviously a part of a modern curriculum.
Similarly, doing math problems isn’t necessarily going to teach you how to do more general problem-solving. Particularly doing the type of abstract math problems that are the basis of No Child Left Untested, er Behind. What you’ll learn is how to do abstract math problems, which isn’t part of most job descriptions these days. Now, if you want to learn to solve meaningful math problems, you have to be given meaningful math problems, as the late David Jonassen told us. And the feedback has to include the problem-solving process, not just the math!
Moreover, if you want to generalize to other problem-solving, like science or engineering, you need explicit scaffolding to reflect on the process and the generality across domains. So you need some problem-solving in other domains to abstract and generalize across. Otherwise, you’ll get good at solving real world math problems, which is necessary but not sufficient. I remember my child’s 2nd grade teacher who was talking about the process they emphasized for writing - draft, get feedback, review, refine - and I pointed out that was good for other domains as well: math, drawing, etc. I saw the light go on. And that’s the point, generalizing is valuable in learning, and facilitating that generalization is valuable in teaching.
I laud the efforts to help folks understand why math and science are important, but you can’t let people go away thinking that doing abstract math problems is a valuable activity. Let’s get the details right, and really accelerate our outcomes.
Clark
This week is Working Out Loud week, and I can’t but come out in support of a principle that I think is going to be key to organizational success. And, I think, L&D has a key role to play.
The benefits from working out loud are many. Personally, documenting what you’re doing serves as a reminder to yourself and awareness for others. The real power comes, however, from taking it to the next level: documenting not just what you’re doing, but why. This helps you in reflecting on your own work, and in being clear in your thinking. Moreover, sharing your thinking gives you a second benefit: getting others’ input, which can really improve the outcome.
In addition, it gives others a couple of benefits. They get to know what you’re up to, so it’s easier to align, but if your thinking is any good, it gives them the chance to learn from how you think.
So what is the role of L&D here? I’ll suggest there are two major roles: facilitating the skills and enabling the culture.
First, don’t assume folks know what working out loud means. And even if they do, they may not be good at it in terms of knowing how to indicate the underlying thinking. And they’ll likely want feedback and encouragement. So L&D needs to model it, practicing what they preach. They need to make sure the tools are easily available and awareness is shared. Execs need to be shown the benefit and encouraged to model the behavior too. And L&D will have to trumpet the benefits and accomplishments, and encourage the behavior.
None of this is really likely to succeed if you don’t have a supportive culture. In a Miranda organization, no one is going to share. Instead, you need the elements of a learning organization: the environment has to value diversity, be open to new ideas, provide time for reflection, and most of all be safe. And L&D has to understand the benefits and continue to promote them, identify problems, and work to resolve them.
Note that this is not something you manage or control. The attitude here has to be one of nourishing, aka ‘seed, feed, and weed’. You may track it, and you want to be looking for things to support or behaviors to improve, but the goal is to develop a vibrant community of sharing, not to squelch anything that violates the hierarchy.
Working out loud benefits the individual and the organization in a healthy environment. Getting the environment right, and facilitating the practice, are valuable contributions, and ones that L&D can, and should, contribute to.
#itashare
Clark
Last week I had the pleasure of keynoting Charles Sturt University’s annual Education conference. They’re in the process of rethinking what their learning experience should be, and I talked about the changes we’re trying to make at the Wadhwani Foundation.
I was reminded of previous conversations about learning experience design and the transformative experience. And I have argued in the past that what would make an optimal value proposition (yes, I used that phrase) in a learning market would be to offer a transformative learning experience. Note that this is not just about the formal learning experience, but has two additional components.
Now, it does start with a killer learning experience. That is, activity-based, competency-driven, model-guided, with lean and compelling content. Learners need role-plays and simulations to be immersed in practice, and scaffolded with reflection to develop their flexible ability to apply these abilities going forward. But wait, there’s more!
As a complement, there needs to be a focus on developing the learner as well as their skills. That is, layering on the 21st Century skills: the ability to communicate, lead, problem-solve, analyze, learn, and more. These need to be included and developed across the learning experience. So learners not only get the skills they need to succeed now, but to adapt as things change.
The third element is to be a partner in their success. That is, don’t just give them a chance to sink or swim on the basis of the content; look for ways in which learners might be struggling with other issues, and work hard to ensure they succeed.
I reckon that anyone capable of developing and delivering on this model provides something that others can only emulate, not improve upon. We’re working on the first two elements initially at the Foundation, and hopefully we’ll get to the latter soon. But I reckon it’d be great if this were the model all were aspiring to. Here’s hoping!
Clark
In trying to shift from a traditional elearning approach to a more enlightened one, a deeper one, you are really talking about viewing things differently, which is non-trivial. And then, even if you know you want to do better, you still need some associated skills. Take, for example, models.
I’ve argued before that models are a better basis for action, for making better decisions. Arbitrary knowledge is hard to recollect, and consequently brittle. We need a coherent foundation upon which to base decisions, and arbitrary information doesn’t help. If I see a ‘click to learn more’, for instance, I have a good clue that someone’s presenting arbitrary information. However, as I concluded in the models article, "It’s not always there, nor even easily inferable." Which is a problem that I’ve been wrestling with. So here’re my interim thoughts.
Others have counseled that not just any Subject Matter Expert (SME) will do. They may be able to teach material with their stories and experience, and they can certainly do the work, but they may not have a conscious model that’s available to guide novices. So I’ve heard that you have to find one who’s capable. If you don’t, and you don’t have good source material, you’re going to have to do the work yourself. You might be able to find a model in a helpful place like Wikipedia (and please join us in donating to help keep it going), but otherwise you’re going to have to do the hard yards.
Say you’re wrestling with a list of things, like attacks on networks, or impacts on blood pressure. There is a laundry list of them, and there may seem to be no central order. So what do you do? Well, in these cases where I don’t have one, I make one.
For instance, for attacks on networks, the inherent structure of the network provides an overarching framework for vulnerabilities. Networks can be attacked digitally, through password cracking or software vulnerabilities. The data streams can also be hacked, either by physically connecting to wires or by intercepting wireless signals. Socially, you can trick people into doing the wrong things too. Similarly with blood pressure: the nature of the system tells us that constricted or less flexible vessels (e.g. from aging) will increase blood pressure, decreased volume in the system will decrease it, etc.
The point is, I’m using the inherent structure to provide a framework that wasn’t given. Is it more than the minimum? Yes. But I’ll argue that if you want the information to be available when necessary, or rather that learners will be able to make the right decisions, this is the most valuable thing you can do. And it might take less effort overall, as you can teach the model and support making good inferences more efficiently than teaching all the use cases.
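To make that concrete, the network-attack laundry list above might be organized by where the attack strikes (a sketch of the structuring move, not a complete security taxonomy):

```python
# Using the network's inherent structure to organize an otherwise
# flat laundry list of attacks (illustrative, not exhaustive).
attack_model = {
    "digital": [            # attacks on the endpoints themselves
        "password cracking",
        "software vulnerabilities",
    ],
    "data stream": [        # attacks on the links between endpoints
        "physical wiretap",
        "wireless interception",
    ],
    "social": [             # attacks on the people using the network
        "social engineering",
    ],
}

# With the model, a new attack is reasoned about by where it strikes,
# rather than memorized as one more arbitrary list item.
for surface, attacks in attack_model.items():
    print(surface, "->", ", ".join(attacks))
```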
And is this a sufficient approach? I can’t say that; I haven’t spent enough time on other content. So at this point treat it like a heuristic. However, it gives you something you can at least take to a SME and have them critique and improve it (which is easier than trying to extract a model whole-cloth ;).
Now, there might also be the case that there just isn’t an organizing principle (I’m willing to concede that, for now…). Then, you may simply need to ask your learners to do some meaningful processing on the material. Look, if you’re presenting it, then you’re expecting them to remember it, and presenting arbitrary information isn’t going to achieve that. If they need to remember it, have them process it. Otherwise, why present it at all?
Now, this is only necessary when you’re trying to do formal learning; it might be that you don’t have to get it in folks’ heads and can put it in the world. Do that if you can. But I believe that what will make a bigger difference for learners, for performers, will be the ability to make better decisions. And, in our increasingly turbulent times, that will come from models, not rote information. So please, if you’re doing formal learning, do it right, and get the models you need. Beg, borrow, steal, or make them, but get them. Please?
Clark
The eLearning Guild, in queuing up interest in their Learning Solutions/Performance Ecosystem conference, asked for some thoughts on the role of technology and training. And, of course, I obliged. You can see them here.
In short, I said that technology can augment what we already do, serving to fill in gaps between what we desired and what we could deliver, and it also gave us some transformative capabilities. That is, we can make the face to face time more effective, extend the learning beyond the classroom, and move the classroom beyond the physical space.
The real key, a theme I find myself thumping more and more often, is that we can’t use technology in ineffective ways. We need to use technology in ways that align with how we think, work, and learn. And that’s all too rare. We can do amazing things if we muster the will and resources, do the due diligence on what would be a principled approach, and then do the cycles of development and iteration to get to where the solution is working as it should.
Again, the full thoughts can be found on their blog.
Clark
I’ve been working on moving a team to deeper learning design. The goal is to practice what I preach, and make sure that the learning design is competency-aligned, activity-based, and model-driven. Yet, doing it in a pragmatic way.
And this hasn’t been without its challenges. I presented my vision to the team, we worked out a process, and we started coaching the team during development. In retrospect, this wasn’t proactive enough. There were a few other hiccups.
We’re currently engaged in a much tighter cycle of development and revision, and now feel we’re getting close to the level of effectiveness and engagement we need. Whether a) it’s really better, and b) we can replicate and scale it, is an open question.
At core are a few elements. For one, a rabid focus on what learners are doing is key. What do they need to be able to do, and what contexts do they need to do it in?
The competency-alignment focus is on the key tasks that they have to do in the workplace, and making sure we’re preparing them across pre-class, in-class, and post-class activities to develop that ability. A key focus is having them make the decision in the learning experience that they’ll have to make afterward.
I’m also pushing very hard on making sure that there are models behind the decisions. I’m trying hard to avoid arbitrary categorizations, and find the principles that drove those categorizations.
Note that all this is not easy. Getting the models is hard when the resources provided don’t include that information. Avoiding presenting just knowledge and definitions is hard work. The tools we use make certain interactions easy, and other ones not so easy. We have to map meaningful decisions into what the tools support. We end up making tradeoffs, as do we all. It’s good, but not as good as it could be. We’ll get better, but we do want to run in a practical fashion as well.
There are more elements to weave in: layering on some general biz skills is embryonic. Our use of examples needs to get more systematic. As does our alignment of learning goal to practice activity. And we’re struggling to have a slightly less didactic and earnest tone; I haven’t worked hard enough on pushing a bit of humor in, tho’ we are ramping up some exaggeration. There’s only so much you can focus on at one time.
We’ll be running some student tests next week before presenting to the founder. Feeling mildly confident that we’ve gotten a decent take on quality learning design with suitable production value, but there is the barrier that the nuances of learning design are subtle. Fingers crossed.
I still believe that, with practice, this becomes habit and easier. We’ll see.
Clark
One of the concerns I hear is whether L&D still has a role. The litany is that they’re so far out of touch with their organization, and science, that it’s probably better to let them die an unnatural death than to try to save them. The prevailing attitude of this extreme view is that the Enterprise Social Network is the natural successor to the LMS, and it’s going to come from operations or IT rather than L&D. And, given that I’m on record suggesting that we revolutionize L&D rather than ignoring it, it makes sense to justify why. And while I’ve had other arguments, a really good argument comes from my thesis advisor, Don Norman.
Don’s on a new mission, something he calls DesignX, which is scaling up design processes to deal with "complex socio-technological systems". And he recently wrote an article about why DesignX that put out a good case why L&D as well. Before I get there, however, I want to point out two other facets of his argument.
The first is that often design has to go beyond science. That is, while you use science when you can, when you can’t you use theory inferences, intuition, and more to fill in the gaps, which you hope you’ll find out later (based upon later science, or your own data) was the right choice. I’ve often had to do this in my designs, where, for instance, I think research hasn’t gone quite far enough in understanding engagement. I’m not in a research position as of now, so I can’t do the research myself, but I continue to look at what can be useful. And this is true of moving L&D forward. While we have some good directions and examples, we’re still ahead of documented research. He points out that system science and service thinking are science based, but suggests design needs to come in beyond those approaches. To the extent L&D can, it should draw from science, but also theory and keep moving forward regardless.
His other important point is, to me, that he is talking about systems. He points out that design as a craft works well on simple areas, but where he wants to scale design is to the level of systemic solutions. A noble goal, and here too I think this is an approach L&D needs to consider as well. We have to go beyond point solutions - training, job aids, etc - to performance ecosystems, and this won’t come without a different mindset.
Perhaps the most interesting point, however, and the one that triggered this post, was on why designers are needed. Others focus on efficiency and effectiveness, he argues, but designers bring empathy for the users as well. And I think this is really important. As I used to say to the budding software engineers I was teaching interface design to: "don’t trust your intuition, you don’t think like normal people". Similarly, the reason I want L&D in the equation is that they should be the ones who really understand how we think, work, and learn, and consequently they should be the ones facilitating performance and development. It takes empathy with users to facilitate them through change, to help them deal with the fears and anxieties that come with new systems, and to understand what a good learning culture is and help foster it.
Who else would you want guiding an organization in achieving effectiveness in a humane way? So Don has provided, to me, a good argument for why we might still want L&D (well, P&D really ;) in the organization. Well, as long as they’re also addressing the bigger picture and not just pushing info dump and knowledge test. Does this make sense to you?
#itashare #revolutionizelnd
Clark
Blog | Nov 22, 2015 05:17am
A few months back, the esteemed Dr. Will Thalheimer encouraged me to join him in a blog dialog, and we posted the first one on who L&D has a responsibility to. And while we took the content seriously, I can’t say the same of our approach. We decided to continue, and here’s the second in the series, this time looking at what might be hindering the opportunity for design to get better. Again, a serious convo leavened with a somewhat demented touch:
Clark:
Will, we’ve suffered Fear and Loathing on the Exhibition Floor over the state of the elearning industry before, but I think it’s worth looking at some causes and maybe even some remedies. What is the root cause of our suffering? I’ll suggest it’s not massive consumption of heinous chemicals; instead, we might want to look to our tools and methods.
For instance, rapid elearning tools make it easy to take PPTs and PDFs, add a quiz, and toss the resulting content dump and knowledge test over to the LMS, where it has no impact on the organization. Oh, the horror! On the other hand, processes like ADDIE make it easy to take a waterfall approach to elearning, mistakenly trusting that ‘if you include the elements, it is good’ without understanding the nuances of what makes the elements work. Where do you see the devil in the details?
Will:
Clark my friend, you ask tough questions! This one gives me Panic, creeping up my spine like the first rising vibes of an acid frenzy. First, just to be precise—because that’s what we research pedants do—if this fear and loathing stayed in Vegas, it might be okay, but as we’ve commiserated before, it’s also in Orlando, San Francisco, Chicago, Boston, San Antonio, Alexandria, and Saratoga Springs. What are the causes of our debauchery? I once made a list—all the leverage points that prompt us to do what we do in the workplace learning-and-performance field.
First, before I harp on the points of darkness, let me twist my head 360 and defend ADDIE. To me, ADDIE is just a project-management tool. It’s an empty baseball dugout. We could add high-schoolers, Poughkeepsie State freshmen, or the 2014 Red Sox, and we’d create terrible results. Alternatively, we could add World Series champions to the dugout and create something beautiful and effective. Yes, we often use ADDIE stupidly, as a linear checklist, without truly doing good E-valuation, without really insisting on effectiveness, but this recklessness, I don’t think, is hardwired into the ADDIE framework—except maybe the linear, non-iterative connotation that only a minor-leaguer would value. I’m open to being wrong—iterate me!
Clark:
Your defense of ADDIE is admirable, but is the fact that it’s misused perhaps reason enough to dismiss it? If your tool makes it easy to be led astray, like the alluring temptation of a forgetful haze, is it perhaps better to toss it in a bowl and torch it rather than fight it? Wouldn’t the Successive Approximation Model be a better formulation to guide design?
Certainly the user experience field, which parallels ours in many ways and leads in some, has moved to iterative approaches specifically to align efforts with demonstrably successful outcomes. Similarly, I get ‘the fear’ and worry about our tools. Like the demon rum, the temptation to do what is easy with certain tools may serve as a barrier to more effective application of their inherent capability. While you can do good things with bad tools (and vice versa), perhaps it’s the garden path we too easily tread, ending up on the rocks. Not that I have a clear idea (and no, it’s not the ether) of how tools could be configured to more closely support meaningful processing and application, but it’s arguably a toolset worth assembling. Like the bats that have suddenly appeared…
Will:
I’m in complete agreement that we need to avoid models that send the wrong messages. One thing most people don’t understand about human behavior is that we humans are almost all reactive—only proactive in bits and spurts. For this discussion, this has meaning because many of our models, many of our tools, and many of our traditions generate cues that trigger the wrong thinking and the wrong actions in us workplace learning-and-performance professionals. Let’s get ADDIE out of the way so we can talk about these other treacherous triggers. I will stipulate that ADDIE does tend to send the message that instructional design should take a linear, non-iterative approach. But what’s more salient about ADDIE than linearity and non-iteration is that we ought to engage in Analysis, Design, Development, Implementation, and Evaluation. Those aren’t bad messages to send. It’s worth an empirical test to determine whether ADDIE, if well taught, would automatically trigger linear non-iteration. It just might. Yet, even if it did, would the cost of this poor messaging overshadow the benefit of the beneficial ADDIE triggers? It’s a good debate. And I commend those folks—like our comrade Michael Allen—for pointing out the potential for danger with ADDIE. Clark, I’ll let you expound on rapid authoring tools, but I’m sure we’re in agreement there. They seem to push us to think wrongly about instructional design.
Clark:
I spent a lot of time looking at design methods across different areas - software engineering, architecture, industrial design, graphic design, the list goes on - as a way to look for the best in design (just as I’ve looked across engagement disciplines, learning approaches, and more; I can be kinda, er, obsessive). I found that some folks have 3-step models, some 4, some 5. There’s nothing magic about ADDIE as ‘the’ five steps (though having *a* structure is of course a good idea). I also looked at interface design, which arguably aligns most closely with what elearning design is about, and that field has avoided some serious side effects by focusing on models that put the important elements up front: they talk about participatory design, situated design, and iterative design as the focus, not the content of the steps. They have steps, but the emphasis is on an evaluative design process. I’d argue that’s your empirical test (that, or the fumes are getting to me). So I think the way you present the model does influence the implementation. If advertising has moved from fear motivation to aspirational motivation (cf. Jonah Sachs’ Winning the Story Wars), so too might we want to focus on the inspirations.
Will:
Yes, let’s get back to tools. Here’s a pet peeve of mine. None of our authoring tools—as far as I can tell—prompt instructional designers to utilize the spacing effect or subscription learning. Indeed, most of them encourage—through subconscious triggering—a learning-as-an-event mindset.
For our readers who haven’t heard of the spacing effect: it is one of the most robust findings in learning research, showing that repetitions spaced more widely in time support learners in remembering. Subscription learning is the idea that we can provide learners with learning events of very short duration (less than 5 or 10 minutes), and thread those events over time, preferably utilizing the spacing effect.
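To make the shape of that concrete, here’s a minimal sketch, in Python, of how a tool might thread short lessons over expanding intervals. All the names and numbers are invented; no authoring tool I know of exposes anything like this, and the expanding-gap rule is a crude stand-in for the real spacing research:

```python
from datetime import date, timedelta

def spaced_repetitions(topic, start, reps=4, first_gap_days=1, factor=2):
    """Schedule short revisits of one topic, each gap wider than the
    last: a crude expanding-interval take on the spacing effect,
    not a validated memory model."""
    events, gap, day = [], first_gap_days, start
    for _ in range(reps):
        events.append((day, topic))
        day += timedelta(days=gap)
        gap *= factor  # widen the spacing on each repetition
    return events

# Thread several topics into one 'subscription' stream of short (~5 min) events
stream = sorted(event
                for topic in ["coaching", "feedback", "delegation"]
                for event in spaced_repetitions(topic, date.today()))
for when, topic in stream:
    print(when, topic)
```

The point isn’t the arithmetic; it’s that nothing in our current tools even nudges a designer toward producing a stream like this.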
Do you see the same thing with these tools—that they push us to see learning as a longer-than-necessary bong hit, when tiny puffs might work better?
Clark:
Now we’re into some good stuff! Yes, absolutely; our tools have largely focused on the event model, and made it easy to do simple assessments. Not simple good assessments, just simple ones. It’s as if they think designers don’t know what they need. And, as the success of our colleague Cammy Bean’s book The Accidental Instructional Designer shows, they may be right. Yet I’d rather have a power tool that’s incrementally explorable and scaffolds good learning than one that ceilings out just when we’re getting somewhere interesting. Where are the templates for spaced learning, as you aptly point out? Where are the tools for two-step assessments (first tell us which answer is right, then why it’s right, as Tom Reeves has pointed us to)? Where are more branching-scenario tools? Such capabilities tend to hover at the top end of some tools, unused. I guess what I’m saying is that the tools aren’t helping us lift our game, and while we shouldn’t blame the tools, tools that pointed the right way would help. And we need it (and a drink!).
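To be concrete about the two-step idea, here’s a rough sketch (in Python, with an invented structure and example content; this is not any tool’s actual format) of what such an assessment item could look like as data:

```python
from dataclasses import dataclass

@dataclass
class TwoStepItem:
    """A two-step assessment item: pick the right answer, then pick
    the right reason. An illustrative structure only."""
    stem: str
    answers: dict      # answer option -> True if correct
    rationales: dict   # rationale option -> True if correct

    def score(self, answer_choice, rationale_choice):
        # Credit the answer and the reasoning separately (0-2 points)
        return (int(self.answers.get(answer_choice, False))
                + int(self.rationales.get(rationale_choice, False)))

item = TwoStepItem(
    stem="Which response best handles the customer's complaint?",
    answers={"Acknowledge, then probe": True,
             "Offer a refund immediately": False},
    rationales={"It surfaces the real issue before committing to a fix": True,
                "Refunds always end the conversation fastest": False},
)
print(item.score("Acknowledge, then probe",
                 "It surfaces the real issue before committing to a fix"))  # 2
```

A template like this would make the two-step pattern as easy to author as a plain multiple-choice question, which is exactly the lift our tools aren’t giving us.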
Will:
Should we blame the toolmakers then? Or how about blaming ourselves as thought leaders? Perhaps we’ve failed to persuade! Now we’re on to fear and self-loathing… Help me Clark! Or, here’s another idea: how about you and I raise $5 million in venture capital and build our own tool? Seriously, it’s a sad sign about the state of the workplace learning market that no one has filled the need. It says to me that either (1) the vast cadre of professionals don’t really understand the value, or (2) the capitalists who might fund such a venture don’t think the vast cadre really understand the value, or (3) the vast cadre are so unsuccessful in persuading their own stakeholders that the truth about effectiveness doesn’t really matter. When we get our tool built, how about we call it Vastcadre? Help me Clark! Kent you help me Clark? Please get this discussion back on track… What else have you seen that keeps us ineffective?
Clark:
Gotta hand it to Michael Allen, putting his money where his mouth is, and building ZebraZapps. Whether that’s the answer is a topic for another day. Or night. Or… so what else keeps us ineffective? I’ll suggest that we’re focusing on the wrong things. In addition to our design processes, and our tools, we’re not measuring the right things. If we’re focused on how much it costs per bum in seat per hour, we’re missing the point. We should be measuring the impact of our learning. It’s about whether we’re decreasing sales times, increasing sales success, solving problems faster, raising customer satisfaction. If we look at what we’re trying to impact, then we’re going to check to see if our approaches are working, and we’ll get to more effective methods. We’ve got to cut through the haze and smoke (open up what window, sucker, and let some air into this room), and start focusing with heightened awareness on moving some needles.
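To put the measurement point in concrete terms, here’s a toy contrast, with entirely made-up numbers, between the efficiency metric we default to and an impact metric tied to a business outcome:

```python
# Toy contrast, all numbers invented: the efficiency metric we default
# to versus an impact metric tied to a business outcome.
cost = 12_000.00           # what the course cost to build and run
learners, hours = 40, 2.5  # bums in seats, and seat time per bum

cost_per_seat_hour = cost / (learners * hours)  # says nothing about outcomes

# What we should be asking: did a needle move?
sale_days_before, sale_days_after = 21.0, 17.5  # avg days to close a sale
impact = (sale_days_before - sale_days_after) / sale_days_before

print(f"${cost_per_seat_hour:.2f} per seat-hour")  # efficiency
print(f"{impact:.0%} faster sales cycle")          # effectiveness
```

The first number tells you what the training cost; only the second tells you whether it was worth anything.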
So there you have it. Should we continue our wayward ways?
Clark
Blog | Nov 22, 2015 05:17am
For Inside Learning & Technologies’ 50th edition, a number of us were asked to provide reflections on what has changed over the past 15 years. This was pretty much the period in which I’d returned to the US and taken up with what was effectively a startup, which led to my life as a consultant. As an end-of-year piece, I have permission to post that article here:
15 years ago, I had just taken a step away from academia and government-sponsored initiatives to a new position leading a team in what was effectively a startup. I was excited about the prospect of taking the latest learning science to the needs of the corporate world. My thoughts were along the lines of "here, where we have money for meaningful initiatives, surely we can do something spectacular". And it turns out that the answer is both yes and no.
The technology we had then was pretty powerful, and that has only increased in the past 15 years. We had software that let us leverage the power of the internet, and reasonable processing power in our computers. The Palm Pilot had already made mobile a possibility as well. So the technology was no longer a barrier, even then.
And what amazing developments we have seen! The ability to create rendered worlds, accessible first through a dedicated application and now through just a browser, is truly an impressive capability. Regardless of whether we overestimated the value proposition, it is still quite the technology feat. And similarly, the ability to communicate via voice and video allows us to connect people in ways once only dreamed of.
We also have rich new ways to interact, from microblogs to wikis (collaborative documents). These capabilities free us from the constraints of proximity and synchronicity: we can work together without worrying about where the solution is hosted or where our colleagues are located. Social media allow us to tap into the power of people working together.
The improvements in mobile capabilities are also worth noting. We have gone from hype to hyphens, where a limited monochrome handheld has given way to powerful high-resolution full-color multi-channel always-connected sensor-rich devices. We can deliver pretty much anything anywhere we want, fulfilling Arthur C. Clarke’s famous proposition that any sufficiently advanced technology is indistinguishable from magic.
Coupled with our technological improvements are advances in our understanding of how we think, work, and learn. We now have a better recognition of how we act in the world, how we work with others, and how we best learn. We have information-age understandings that illustrate why industrial-age methods are not appropriate.
Not truly new, but reaching mainstream awareness over the last decade and more, is the recognition that the model of our thinking as formal and logical is being updated. While we can work in such ways, it is the exception rather than the rule. Such thinking is effortful, and it turns out both that we avoid it and that there is a limit to how much deep thinking one can do in a day. Instead, we use our intuition beyond where we should, and while this is generally okay, it helps to understand our limitations and design around them.
There is also a spreading awareness of how much our thinking is externalized in the world, and how much we use technology to support us in being effective. We have recognized the power of external support for thinking through tools such as checklists and wizards. We do this pretty naturally, and well-designed technology greatly facilitates our ability to think.
There is also recognition that the model of individual innovation is broken, and that working together is far superior to working alone. The notion of the lone genius disappearing and coming back with the answer has been replaced by teams iterating on top of previous work. When people work together in effective ways, in a supportive environment, the outcomes are better. While this is not easy to effect in many circumstances, we know the practices and culture elements we need; the barrier is our commitment to getting there, not our understanding.
Finally, our approaches to learning are better informed now. We know that emotional engagement is a valuable component in moving to learning experience design. We understand the role of models in supporting more flexible performance. We also have evidence of the value of performing in context. It is not news that information dump and knowledge test do not lead to meaningful skill acquisition, and it is increasingly clear that meaningful practice does. It is also increasingly clear that, as things move faster, meaningful skills - the ability to make better decisions - are what will provide the sustainable differentiator for organizations.
So imagine my dismay in finding that the approaches we are using in organizations are still largely rooted in yesteryear. While we have had rich technology opportunities to combine with our enlightened understanding, that is not what we are seeing. What we see are still expectations that learning is done in-the-head, top-down, with information dump and meaningless assessment not tied to organizational outcomes. And while it demonstrably is not working, there seems little impetus to change.
Truly, there has been little change in our underlying models in 15 years. While the technology is flashier, the buzz words have mutated, and some of the faces have changed, we are still following myths like learning styles and generational differences, we are still using ‘spray and pray’ methods in learning, we are still not taking on performance support and social learning, and perhaps most distressingly, we are still not measuring what matters.
Sure, the reasons are complex. There are lots of examples of the old approaches, the tools and practices are aligned with bad learning practices, the shared metrics reflect efficiency instead of effectiveness, … the list goes on. Yet a learning & development (L&D) unit unengaged with the business units it supports is not sustainable, and consequently the lack of change is unjustifiable.
And the need is greater than ever. The rate of change is increasing, and organizations now need not just to be effective but to become agile. There is no longer time to plan, prepare, and execute; the need is to continually adapt. Organizations need to learn faster than the competition.
The opportunities are big. The critical component for organizations to thrive is to couple optimal execution (the result of training and performance support) with continual innovation (which does not come from training). Instead, imagine an L&D unit that is working with business units to drive interventions that move key performance indicators (KPIs). Consider an L&D unit that is responsible for facilitating the interactions that lead to new solutions, new products and services, and better relationships with customers. That is the L&D we need to see!
The path forward is not easy, but it is systematic and doable. A vision of a ‘performance ecosystem’ - a rich suite of tools that support success, surround the performer, and align with how they think, work, and learn - provides an endpoint to head towards. Every organization’s path will be different, but a good start is doing formal learning right, beginning to look at performance support, and commencing work on the social media infrastructure.
An associated focus is building a meaningful infrastructure (hint: one all-singing, all-dancing LMS is not the answer). A strategy to get there is a companion effort. And ultimately, a learning culture will be necessary. These are not just necessary components for L&D; they are the necessary components for a successful organization, one agile enough to adapt to the increasing rate of change we are facing.
And here is the first step: L&D has to become a learning organization. Mantras like ‘work out loud’, ‘fail fast’, and ‘reflect’ have to become part of the L&D culture. L&D has to start experimenting and learning from the experiments. Let us ensure that the past 15 years are a hibernation we emerge from, not the beginning of the end.
Here’s to change for the better. May 2015 be the best year yet!
Clark
Blog | Nov 22, 2015 05:16am