Talking with my ITA colleagues yesterday, we were discussing whether going into the office is necessary or not. And it seemed that there were times it made sense, and times it didn't.
What doesn't make sense is trying to do focused work in an office. If you need to think, random conversations and interruptions get in the way. Yes, you need colleagues and resources 'to hand', but those are available digitally and distally.
Being together makes sense, it seemed to us, when you either are meeting for the first time (e.g. with clients), or want creative friction. You can interact virtually for planned work, but it helps to interact F2F when getting to know one another, and when you’re looking for serendipitous interactions. Jay Cross, in his landmark Informal Learning book, talked about how offices were being designed to have the mail room and coffee in the same place, to facilitate those interactions. If conversations are the engine of business, having the opportunity for their occurrence is useful.
This seems the opposite of most visions of work: work away from the office, interact in the office, instead of the reverse. So, is this the flipped office?
Clark
I’ve been watching the Olympics, at least select bits (tho’ I’m an Olympics widower; the full panoply is being watched in the house). And I enjoy seeing some of the things that I can relate to, but I realize that the commentators make a big difference.
The distinction seems to come down to a fundamental difference (setting aside the commentators who drill into uncomfortable personal zones with a zeal that seems a wee bit inhuman): those who describe what's happening versus those who explain it. Let me explain.
Description isn't a bad thing. When I watch (American) football, a guilty pleasure, the play-by-play commentator tends to describe the action, helping me ascertain what happened when it's confounded by intervening players, bad camera angles, interruptions, what have you. Similarly, I've been listening to the Olympic commentators describe the action in case I missed it. And that's helpful.
But in football, the color commentator (often a former player or coach) explains what happened and interprets why. Similarly, the good Olympic commentary has someone explain not just what happened (X just made a spectacular run), but why (Y was absorbing the forces better, minimizing the elements that would detract from speed). And this is really important.
I have a former mentor, colleague, and friend who is now part of an organization that enhances sports broadcasts with additional information; it's a form of augmented reality that started with first-down lines in football and now includes things like wind and tracking information in the America's Cup. Similarly, I have loved the overlays of one person's performance against another. The point is providing insight into the context, and more importantly the thinking behind the performance.
The relevance I'm seeing is that showing the underlying concepts helps inform the exceptional performance, educate about the nuances, and support comprehension. This relates so much to what we need to be doing in business. Working and learning out loud is so important for transferring skills across the organization. Showing the thinking helps spread the understanding. Whether it's breaking from the pack in snow cross, or closing deals, having the thinking annotated is essential for spreading learning.
Whether it’s a retrospective by the performer or expert commentary, explaining, not just describing, is important. Does my explanation make sense? :)
Clark
While we were at the Training 2014 conference, I was interviewed by Bryan Austin of GameOn Learning about what my crusade is. And they have been kind enough to host it.
In the video, Bryan asks me about what I’m causing trouble about, why it’s important, and what people might do. It’s all the stuff I’m fired up about, and if you’d like to hear me talk about it instead of reading it, this is an easy way. It’s not the only thing I’m stirring up this year, but it’s arguably the largest. Check it out!
Clark
Dorian Peters has written the first book I’ve seen on UI for learning, linking two of my favorite things. Understand that I did my Ph.D. in a research group that was hot into HCI at the time, and my first faculty position was to teach User Experience. At the time, in many ways UI was ahead of ID in terms of user-centered practices, and I made many presentations on porting UI concepts to Ed Tech audiences.
Consequently, it was a pleasant surprise to hear about this book, and more so now that I've had a chance to peruse it. This book is very valuable not just for interface designers doing learning solutions, but also for IDs and developers who end up having to design. The second chapter on how we learn is a great whirlwind tour of learning, well grounded in research and setting up the foundation for those whose background isn't learning. Similarly, she provides an overview of elearning in Chapter 3, and basic UI terms in Chapter 4. From there, it's all about UI for learning.
She starts very early on in the book by showing how learning interfaces have to be different from user interfaces. If your goal is to learn, not do a task, it makes sense that the interface should and could be different. She then delivers on this and goes on to cover a suite of principles: learning is visual, learning is social, learning is emotional, and learning is mobile, in subsequent chapters (with one on multimedia and gaming interspersed). She even discusses the design of learning spaces. In each, she separates out principles and strategies.
This is a fun book, richly illustrated with examples, quotes, and graphical highlighting, practicing what she preaches. It is clear from the breadth and depth of citations that she's done her homework, and this is a well-organized, easy-to-read, and useful book.
Interface Design for Learning is a book that everyone who ends up developing learning experiences, creating the interfaces learners interact with, needs to have to hand: on the desk, ready to refer to on each project until the principles are firmly internalized. Highly recommended.
Clark
I have never really cottoned to the practice of photo-bombing. While it might be a fun trick to play on a friend, otherwise it seems to me to be selfish. Could there be something better?
One of the things I've been doing is something that I call 'reverse photo-bombing'. When I see a picture being taken, instead of getting in it, I get behind the photographer, and at the right time, I put up bunny ears behind them (or something else silly). What happens is that the subjects laugh, and the photographer tends to get a much better picture. And then I slink off, hoping no one noticed (except the photographees, and they're too busy). It's hard to get the timing right, so it doesn't always work, but when it does I think it's a boon to the group. It did embarrass my daughter when I did it while out with her one time, but I think that's in the parental job description anyway… :)
I think this is a good thing (though I’m willing to be wrong); I think that the world can use more good in it. What I am looking for is more ideas of how we can be quietly adding value to what’s going on, instead of detracting. I’ve heard of nice things like buying someone else’s coffee, or providing extra change. Are there other ideas we can be using? I welcome hearing yours!
Clark
David Kelly of the eLearning Guild has a series of interviews going on about attending conferences. The point is to help attendees get some good strategies about how to prepare beforehand and take advantage after the fact, as well as what to bring and how to get the most out of it.
Today’s interview is me, and you’re welcome to have a look at my thoughts on conferences. Feedback welcome!
Clark
A few weeks ago, I mentioned that this was a year of making trouble, and talked about my forthcoming book, but now it’s time to let you in on the second thing I’m doing. This time, I’m not doing it alone, but in concert with three of my most respected and trusted colleagues, Michael Allen, Julie Dirksen, and Will Thalheimer. So what are the four of us up to?
Well, I won’t give it all away, since we’re doing an official launch next week, but in short, we’re attempting to do something about what we perceive as the sorry state of elearning. We just couldn’t stand by, so we’re standing up and saying something. It’s been a real pleasure to work with them, and we’re hoping what we’re up to might make an impact.
You'll also find out that a number of folks have signed up to support us as trustees. They're not everyone we could and should have gotten, but a representative sample, across sectors, of some of the most respected folks in the industry that we could reach out to in short order.
You can find out what we’ve done on Thurs, March 13th at noon PT (3ET). We’re holding a Google Hangout where we’ll talk about what we’re up to, and then take questions. You can sign up to attend at the associated site.
It’s an honor to be able to work with Will, Julie, & Michael on this, and if you care about good elearning (and if you’re here, I figure that’s a safe bet :), I hope you’ll attend, and join us.
Clark
I’ve been thinking about what are the core elements involved in making an organization successful, and it’s beginning to sort out in a new way for me. And I wanted to run it by you and see what you thought.
The first elements to me are the holy trinity of the performance ecosystem:
Formal Learning
Performance Support
Social Learning
There are several things to notice here. For one, self-created performance support tools also fall under performance support. However, performance support tools created by others, and not by L&D, fall under social (yes, I’m still coming to grips with the whole informal/social distinction, shameful ain’t it!). And, social learning is the Big L version of learning, including problem-solving, research, innovation, the things that fall out from cooperation and collaboration.
Now, underpinning this trinity is another trinity, the factors that provide a foundation. Here I'm talking about:
Strategy
Culture
Infrastructure
Strategy is systematically aligning what the L&D group is doing with the business needs, measuring what's happening, and providing a growth path. However, strategy will get eaten by culture unless you specifically address and develop a culture where innovation can happen. And underpinning this is a technology infrastructure that complements the way we work best. This includes mobile.
So, does this make sense?
Clark
Wow, you try to do one little thing, and everyone gets all upset! Well, that’s how it feels, and it’s a real lesson. So I’ll explain, and then try to clarify.
As I posted, one of the two things I'm pushing is something that's trying to improve elearning, and we're having our launch on Thurs, March 13th at noon PT (3 ET). To get attention, the four of us (Michael Allen, Julie Dirksen, and Will Thalheimer are my co-conspirators) have been teasing the event, trying to build awareness. And this has turned out to be a problem we didn't anticipate.
Our goal was to use our names, capitalizing on the fact that the four of us, while friends and colleagues, are professionally independent of one another, yet had banded together on this initiative. We believed, naively, that people would infer our intentions to be benign. And many did.
Including the trustees we’re so grateful to. We briefed a handful of respected individuals around the industry (not everyone we could and should, but a representative sample across many sectors that we could work with quickly), and got them to lend their names in support.
So we started our marketing, including the site, a press release, and our social media efforts. And learned that what was obvious to us wasn’t obvious to others. There were clear concerns that the focus was on us, not on the message, and that our motives were dubious.
We received both private and publicly expressed concerns about our intentions. Maybe we were trying to promote a book, or a consultancy, or collecting email addresses. And this was an unpleasant surprise. When I have a chance to work with people like Michael, Julie, and Will that I respect for their intellect, concern, and integrity, it is painful to have our motives questioned.
Yet it was a clear miscalculation on our part to assume that our intentions would be obvious to all. As soon as we got wind of the concerns, we discussed how to respond, and as a consequence, we reined in the messages about us on the site. We removed our pictures from the pre-launch page, and toned down the 'authors' page. Hopefully that's enough.
Because the message is the important thing. Frankly, we'd prefer that the change happens and we get no recognition. It's not about us; we've got other fish to fry. We've no joint book, no consultancy, and the only thing we'd ever do with any email addresses is send updates, with nothing for sale. We believe the message would be sullied by any such attempts, and we do not want to risk undermining the message, and the hoped-for change.
So, a valuable lesson learned about marketing. Trying to inspire curiosity using a launch event, and trusting to our names beforehand was, in retrospect, too self-aggrandizing. We probably needed to focus on at least the core of the message, rather than just the mystery of what we were up to. We still hope you’ll attend, and more importantly agree to try harder on the change we’re agitating for. As to the change? Well, the short answer is better elearning. For the specifics, you’ll just have to wait :). BTW, in addition to the launch, at least a subset of us will be discussing the desired change at Learning Solutions session 105 on Wednesday March 19 at 1PM, followed by a Morning Buzz on Thursday. Hope to see you at one of these!
Clark
The main complaint I think I have about the things L&D does isn’t so much that it’s still mired in the industrial age of plan, prepare, and execute, but that it’s just not aligned with how we think, learn, and perform, certainly not for information age organizations. There are very interesting rethinks in all these areas, and our practices are not aligned.
So, for example, the evidence is that our thinking is not the formal logical thinking that underpins our assumptions of support. Recent work paints a very different picture of how we think. We abstract meaning but don’t handle concrete details well, have trouble doing complex thinking and focusing attention, and our thinking is very much influenced by context and the tools we use.
This suggests that we should be looking much more at contextual performance support and providing models, saving formal learning for cases when we really need a significant shift in our understanding and how that plays out in practice.
Similarly, we learn better when we’re emotionally engaged, when we’re equipped with explanatory and predictive models, and when we practice in rich contexts. We learn better when our misunderstandings are understood, when our practice adjusts for how we are performing, and feedback is individual and richly tied to conceptual models. We also learn better together, and when our learning to learn skills are also well honed.
Consequently, our learning similarly needs support in attention, rich models, emotional engagement, and deeply contextualized practice with specific feedback. Our learning isn't a result of a knowledge dump and a test, and yet that's most of what we see.
And not only do we learn better together, we work better together. The creative side of our work is enhanced significantly when we are paired with diverse others in a culture of support, and we can make experiments. And it helps if we understand how our work contributes, and we’re empowered to pursue our goals.
This isn’t a hierarchical management model, it’s about leadership, and culture, and infrastructure. We need bottom-up contributions and support, not top-down imposition of policies and rigid definitions.
Overall, the way organizations need to work requires aligning all the elements to work with us the way our minds operate. If we want to optimize outcomes, we need to align both performance and innovation. Shall we?
Clark
I wrote up my visit to the Intelligent Content conference for eLearnMag, but one topic I didn't cover there was an unanswered question I raised during the conference: should the 'smarts' be in the content or the system? Which is the better way to adapt?
Now the obvious answer is the system. Making content smart would require adding a bunch of additional elements to the content. There would have to be logic to sense conditions and make changes. Simple adaptation could be built in, but it would be hard to revise if you had new information. Having well-defined content and letting the system use contextual information to choose the content is the typical approach used in the industry.
Let's consider the alternative for a minute, however. If the content were adaptive, it wouldn't matter what system it was running on; it would deliver the same capability. For example, you could run under SCORM and still have the smart behavior. And you can't adapt with a system if you have monolithic learning objects that contain the whole experience.
And, at the time I led a team building an adaptive learning engine, we did see adaptive content. However, we chose to have more finely grained content, down to individual practice items, separate examples, concepts, and more. Even our introductions were going to have separate elements. We believed that if we had finely articulated content models and rich tagging, we could change the rules running in the system and get new adaptive behaviors across all the content by changing rules in just one place.
And if new tags were needed on the content objects, we could write programs to add necessary tags rather than have to hand-address every object. In the smart content approach, if you want to change the adaptation, you’re getting into the internals of every content piece.
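To make the distinction concrete, here's a minimal sketch of the system-side approach, assuming nothing about any particular engine (all names, tags, and rules here are illustrative): content objects carry only semantic tags, and a single rule set, changeable in one place, decides what to serve next.

```python
# Minimal sketch of system-side adaptation over tagged content objects.
# All names, tags, and rules are illustrative, not from any real engine.
from dataclasses import dataclass, field


@dataclass
class ContentObject:
    id: str
    tags: set = field(default_factory=set)   # e.g. {"concept", "topic:fractions"}


@dataclass
class LearnerState:
    topic: str
    attempts: int = 0
    last_correct: bool = False


def select_next(library: list, state: LearnerState) -> ContentObject:
    """The 'smarts' live here: change these rules once, and every
    tagged content object inherits the new adaptive behavior."""
    if state.attempts == 0:
        wanted = {"concept", f"topic:{state.topic}"}    # start with the concept
    elif not state.last_correct:
        wanted = {"example", f"topic:{state.topic}"}    # struggling: show an example
    else:
        wanted = {"practice", f"topic:{state.topic}"}   # doing fine: more practice
    return next(obj for obj in library if wanted <= obj.tags)


library = [
    ContentObject("intro-1", {"concept", "topic:fractions"}),
    ContentObject("example-1", {"example", "topic:fractions"}),
    ContentObject("practice-1", {"practice", "topic:fractions"}),
]
print(select_next(library, LearnerState(topic="fractions", attempts=1)).id)  # example-1
```

Embedding the same logic inside each content piece would mean opening up every object whenever the adaptive behavior changed, which is exactly the maintenance cost described above.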
We thought we had it right, and I still think that, for the reasons above, smart systems are the way to go, coupled with semantically tagged and well-delineated content. Happy to hear alternate proposals!
Clark
We've already received the first request for an article on the Serious eLearning Manifesto, and it sparked a realization. We (my co-conspirators are Will Thalheimer, Julie Dirksen, and Michael Allen) launched the manifesto last week, and we really hope you'll have a serious look at the principles. More, we hope you'll find a way to follow them, and join your colleagues in signing on.
What has to happen now is people need to look at them, debate the difficulties in following them, and start thinking about how to move forward. We don’t want people just to sign on, we want them to put the principles into practice. You may not be able to get to all from the beginning, but we’re hoping to drive systematic change towards good elearning.
The Manifesto, if you haven't seen it, touts eight values of serious elearning over what we see too often, focusing on the biggest gaps. The values are backed up by 22 principles pulled from the research. And we've already been called out for it perhaps being too 'instructor'-driven, not social or constructivist enough. To be fair, we've also already had some strong support, and not just from our esteemed trustees, but from signatories as well.
And I don't want to address the issues (yet); what we want to have happen is to get the debate started. So I didn't accept the opportunity to write (yet another) article; instead I said that we'd rather respond to an article talking about the challenges. We want to engage this as a dialog, not a diatribe. Been there, done that, you can see it on the site ;).
So, please, have a look, think about what it would mean, consider the barriers, and let’s see if, together, we can start figuring out how to lift the floor (not close off the ceiling).
Clark
Soren Kaplan gave a keynote on innovation that nicely pulled together a number of strands around how to break through some of our cognitive traps.
Clark
Douglas Merrill gave an entertaining and idiosyncratic presentation about data-driven decisions. Peppered with many amusing anecdotes about good and bad uses of data, he inspired us to do better.
Clark
Cathy Davidson gave us an informative, engaging, and inspirational talk about how we're mismatching industrial approaches in an information era. She gave us data about how we work and why much of what we do isn't aligned, along with the simple and effective approach of think-pair-share. Very worthwhile.
Clark
The launch of the Manifesto has surfaced at least a couple of issues that are worth addressing. The first asks who the manifesto is for, and what should they do differently. That’s a principled response. The second is just how to work differently in the existing situations where the emphasis is on speed. That’s a more pragmatic response. There are not necessarily easy answers, but I’ll try. Today I’ll address the first question, and tomorrow the second.
To the first point, what should the impact be on different sectors? Will Thalheimer (fellow instigator) laid out some points here. My thoughts are related:
Tool vendors should ensure that their tools can support designers interested in these elements. In particular, in addition to presentation of multimedia content, there needs to be: a) the ability to provide separate feedback for different choices, b) the ability to have scenario interactions whereby learners can take multistep decision paths mimicking real experiences, and c) the ability to get the necessary evaluation feedback. In reality, the tools aren’t the limitation, though some may make it more challenging than others. The real issue is in the design.
We’d like custom content houses (aka elearning solution providers) to try to get their clients to allow them to work against these principles, and then do so. Of course, we’d like them to do so regardless! I’ve argued in the past that better design doesn’t take longer. Of course, we realize that clients may not be willing to pay for testing and revision, but that’s the second part…
…we’d like purchasers of custom content to ask that their learning experiences meet these standards, and expect and allow in contracts for appropriate processes. If you’re going to pay for it, get real value! Purchasers need to become aware that not meeting these standards increases the likelihood that any intervention will be of little use.
Similarly, if you’re buying pre-made content (aka shelfware), you should check to see if it also meets these standards. It’s certainly possible!
Managers and executives, whether purchasing or overseeing in-house teams, ideally will be insisting that these standards be met. They should start revising processes both external (e.g. RFPs) and internal (templates, checklists and reviews) to start meeting these criteria.
And designers and developers should start building this into their solutions (within their constraints) while beginning to promote the longer term picture.
Of course, we realize that there are real world challenges. The first is that the internal elearning unit will have to work with the business units on taking a richer and more meaningful approach. Those units may not be ready to consider this! The 'order taker' mentality has become rife in the industry, and it's hard for an L&D unit to suddenly change the rules of engagement. It will take some education around the workplace, but ensuring that the efforts really lead to meaningful change makes it critical.
The second caveat is that not all of these elements will be addressable from day 1. While we'd love that to be the case, we recognize that some things will be easier than others. Focusing on meaningful objectives and, relatedly, meaningful practice are the first two priorities. (While I suspect my colleagues might instead champion measurement, I'm hopeful that more meaningful practice will drive better outcomes. Then, there'll be a natural desire to check the impact.) Once the meaningful focus is accomplished, trimming extraneous content becomes easier.
The goal is to hit the core eight values first, as these are the biggest gaps we see, and integrate many of the principles: performance focused, meaningful to learners, individualized challenges, engagement-driven, authentic contexts, realistic decisions, real-world consequences, and spaced practice. With those, you’ve got a real start on making a difference. And that’s what we’re about, eh? We hope you’ll sign on!
Clark
Yesterday, I posted about what we might like to see from folks, by role, in terms of the Manifesto. The other question to be answered is how to do this in the typical current situation where there’s little support for doing things differently. Let me take a worst-case scenario and try to take a very practical approach. This isn’t an answer for the pulpit, but is for the folks who put all this in the ‘too hard’ basket.
So, worst case: you’re going to still get a shower of PPTs and PDFs and be expected to make a course out of it, maybe (if you’re lucky) with a bit of SME access. And no one cares if it makes a difference, it’s just "do this". And, first, you have my deepest sympathies. We’re hoping the manifesto changes this, but sometimes we have to start with where you live, eh? Recognize that the following is not PoliticallyCorrect™; I’m going outside the principled response to give you an initial kickstart.
The short version is that you’ve got to put meaningful practice in there. You need an experience that sets up a story, requires a choice using the knowledge, and lets the learner see the consequences. That’s the thing that has the most impact, and you’ll want several. This will have far more impact than a knowledge test. To do that isn’t too complex.
The very first thing you need to do when you’ve parsed that content is to figure out what, at core, the person who’s going to have this experience should be able to do differently. What performance aren’t they doing now? This is problematic, because sometimes the problem isn’t a performance problem, but here I’m assuming you don’t have that leeway. So you’ll have to do some inference. Yes, it’s a bit more thinking, but you already have to pull out knowledge, so it’s not that different (and gets easier with practice).
Say you've gotten product data. How would they use that? To sell? To address objections? To troubleshoot? Maybe it's process information you're working on. What would they do with that? Recognize problems? Take the next step? If you're given information on workplace behavior problems? Let them determine whether grey areas exist, or coach people.
You'll need to make a believable context and precipitating situation, and then ask them to respond. Make it challenging, so that the situation isn't clear-cut, and the alternatives are plausible ways the learner could go wrong. The SME can help here. Make the scenario they're facing and the decisions they must make as representative of the types of problems they'll be facing as you can. And try to have the story play out, e.g. have the consequences of their choice presented before they get the right answer or feedback about why it's wrong. There are good reasons for this, but the short version is that it helps them learn to read the situation when it's real.
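As a concrete illustration (a sketch only; the fields and the scenario itself are mine, not from any authoring tool), such a practice item might look something like this as data: a believable context, a decision, plausible wrong choices, and consequences that play out before the feedback.

```python
# Sketch of a scenario-style practice item: context, decision, plausible
# alternatives, and consequences shown before feedback. Illustrative only.

scenario = {
    "context": "A long-time customer calls, angry that the upgrade you sold "
               "them last month has slowed their reporting to a crawl.",
    "decision": "What do you do first?",
    "choices": [
        {
            "text": "Offer a discount on next year's renewal.",
            "consequence": "The customer gets angrier: 'I don't want money, "
                           "I want my reports back.'",
            "feedback": "Plausible, but it dodges the actual performance problem.",
            "correct": False,
        },
        {
            "text": "Ask which reports slowed down, and when it started.",
            "consequence": "The customer calms a little and walks you through "
                           "the two reports that changed.",
            "feedback": "Right: diagnose before you prescribe.",
            "correct": True,
        },
    ],
}


def play(choice_index: int) -> None:
    """Let the story play out first, then give the feedback."""
    choice = scenario["choices"][choice_index]
    print(scenario["context"])
    print(scenario["decision"], "->", choice["text"])
    print(choice["consequence"])   # consequences before the 'right/wrong'
    print(choice["feedback"])


play(0)
```

The design point is that each wrong choice reflects a way a real performer plausibly goes wrong, and the learner sees the situation unfold before being told why.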
Let’s be clear, this is really just better multiple choice question design! I say that so you see you’re not going beyond what you already do, you’re just taking a slightly different tack to it. The point is to work within the parameters of content and questions (for now!), and yet get better outcomes.
Ideally, you’ll find all the plausible application scenarios, and be able to write multiple questions. If there’s any knowledge they have to know cold, you might have to also test that knowledge, but consider designing a job aid. (Even if it’s not tested and revised, which it should be, it’s a start on the path.)
There’s more, but that’s a start (more in my next post). Focus on meaningful practice first. Dress it up. Exaggerate it. But if you put good practice in their path, that’s probably the most valuable change to start with. There’re lots of steps from there, basically turning it into a learning experience: making everything less dense, more minimal, more focused on performance, adding in more meaningfulness. And redoing concept, example, introduction, etc. But the first thing, valuable practice, engages many of the eight values that form the core of the Manifesto: performance focused, meaningful to learners, engagement-driven, authentic contexts, realistic decisions, and real world consequences.
I’ve argued elsewhere that doing better elearning doesn’t take longer, and I believe it. Start here, and start talking about what you’re doing with your colleagues, bosses, what have you. Sign on to the Manifesto, and let them know why. And let me know how it goes.
Clark
In my last post, I wrote about the first step you should take to move to Serious eLearning, which was making deeper practice. Particularly under the constraints of not rocking the boat. Here I want to talk about where you go from there. There are several followup steps you should take after (hopefully) success at the beginning. My big three are: aligning with the practice, extending the practice, and evaluating what is being done.
1. So, if you took the advice to make more meaningful and applied practice within the constraints of many existing workplaces (order-taking, content dump, ‘just do it’), you next want to be creating content aligned with helping the learner succeed at the practice. Once you have those practice questions, you should trim all that material to just what they’ll need to be able to make those decisions.
This also means stripping away unnecessary content, jettisoning the nice-to-know, trimming down the prose (we overwrite). By stripping away the content, you can work in more practice and still meet the (nonsensical) criteria of time in seat. And you’ll have to fight the forces of ‘it has to be in there’, but it’s a worthy fight, and part of the education of the organization that needs to occur.
Get some war stories from your SMEs while you’re working (or fighting) with them. Those should be your examples, and guide your practice design. But if you can’t, you’ll just have to do the best you can. Make the introduction help learners see what they’ll be able to do afterwards. All this fits within the standard format, so you should be able to get away with it and still be taking a stab at improving what you’re doing.
2. The second step is to extend practice. I mean this in two ways. For one, massed practice dissipates quickly, and you want practice spaced out over time. This may be a somewhat hard sell, yet it's really required for learning to stick; another part of the organization's education. You should be developing some extra content at design time for streaming out over time: break up your course so that the hour of seat time becomes 30 or 40 minutes up front, and then 20 or 30 minutes of follow-up practice spread out over days (a rough sketch of such a split follows below). That will make learning stick far more than not spacing it. And if it matters, you should (if it doesn't, why bother?).
The second way to extend it is to work on the meaningfulness of your practice. Ideally, practice would be deep: simulations, or at least scenarios. The situations that will most define company success are, I will suggest, in complex contexts. To deal with those, you need practice in complex contexts: serious games, or at least scenarios. And don't make them boring; exaggerate, so that the practice is as motivating as the real-world situation. Ultimately, you'd like learners creating solutions to real-world problems, like creating business deliverables or performing in immersive environments, not answering multiple choice questions! And extend the experience socially: whether just reflecting on the experience together or, better yet, collaborative problem solving.
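To make the spacing idea from the first point concrete, here's a trivial sketch; the session split and intervals are purely illustrative, not a research-backed prescription.

```python
# Sketch: split an hour of seat time into an up-front session plus spaced
# follow-up practice. The split and intervals are illustrative only.
from datetime import date, timedelta


def spaced_schedule(start, total_minutes=60, upfront_minutes=40,
                    gaps_in_days=(2, 5, 10)):
    """Return (date, minutes, label) tuples for a core session plus follow-ups."""
    per_followup = (total_minutes - upfront_minutes) // len(gaps_in_days)
    schedule = [(start, upfront_minutes, "core session")]
    for gap in gaps_in_days:
        schedule.append((start + timedelta(days=gap), per_followup, "spaced practice"))
    return schedule


for when, minutes, label in spaced_schedule(date(2014, 6, 2)):
    print(when.isoformat(), f"{minutes} min", label)
```

Even a plan this crude gets repeated retrieval spread over days, which is the part that makes the learning stick.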
3. Finally, you should start measuring what you’re doing in important ways. This, too, will require educating your organization. But you shouldn’t assume your first efforts are working. You want to start with the change in the business that needs improving (e.g. performance consulting and Kirkpatrick level 4), then figure out what performance by individuals would lead to that business change, and then develop your learning objectives and practice to get people able to do that performance. And then measure whether they can, and whether it leads to performance changes in the workplace, and ultimately changes in the business metrics. This will require working with the business units to get their data, but ultimately that’s how you become strategic.
Of course, you should be measuring your own work, and similarly whether your interventions are as efficient as possible. But those should only happen after you're having an impact. Measuring your efficiency ("our costs per seat time are at the industry average") without knowing whether you have an impact is delusional. Are your estimates of time to accomplish accurate? Are you using resources efficiently? Are people finding your experiences to be 'hard fun'? These matter only after the question: "are we meeting the organization's needs?"
So, between the previous post and this, hopefully you have some concrete ideas about how even in the most constrained circumstances you can start improving your learning design. And the Manifesto supporting principles go into more depth on this, if you need help. So, does this provide some guidance on how to get started? Ready to sign on? And, perhaps more importantly, what further questions do you have?
Clark
My latest tome, Revolutionize Learning & Development: Performance and Innovation Strategy for the Information Age is out. Well, sort of. What I mean is that it’s now available on Amazon for pre-order. Actually, it’s been for a while, but I wanted to wait until there was some there there, and now there’s the ‘look inside’ stuff so you can see the cover, back cover (with endorsements!), table of contents, sample pages, and more. Ok, so I’m excited!
What I’ve tried to do is make the case for dragging L&D into the 21st Century, and then provide an onramp. As I’ve been saying, my short take is that L&D isn’t doing what it could and should be doing, and what it is doing, it is doing badly. But I don’t believe complaining alone is particularly helpful, so I’m trying to put in place what I think will help as well. The major components are:
what’s wrong (you can’t change until you admit the problem :)
what we know about how we think, work, and learn that we aren’t accounting for
what it would look like if we were doing it right
ways forward
By itself, it's not the whole answer, for several reasons. First, it can't be. I can't know all the different situations you face, so I can't have a roadmap forward for everyone. Instead, what I suppose you could think of is that it's a guidebook (stretching metaphors), offering suggestions that you'll have to sequence into your own path. Second, we don't know it all yet. We're still exploring many of these areas. For example, culture change is not a recipe, it's a process. Third, I'm not sure any one person can know all the answers in such a big field. So, fourth, to practice what I'm preaching, there should be a community pushing this, creating the answers together.
A couple of things on that last part; the first one is a request. The community will need to be in place by the time the book is shipping. The question is where to host it. I don't intend to build a separate community for it on the book site, as there are plenty of places to do this. Google groups, Yahoo groups, LinkedIn…the list goes on. It can't be proprietary (e.g. you have to be a paid member to play). Ideally it'd have collaborative tools to create resources, but I reckon that can be accommodated via links. What do you folks think would be a good choice?
The second part of the community bit is that I’m very grateful to many people who’ve helped or contributed. Practitioner friends and colleagues provided the five case studies I’ve the pleasure to host. Two pioneers shared their thoughts. The folks at ASTD have been great collaborators in both helping me with resources, and in helping me get the message out. A number of other friends and colleagues took the time to read an early version and write endorsements. And I’ve learned together with so many of you by attending events together, hearing you speak, reading your writings, and having you provide feedback on my thoughts via talking or writing to me after hearing me speak or commenting on my scribblings here.
The book isn’t perfect, because I have thought of a number of ways it could be improved since I provided the manuscript, but I have stuck to the mantra that at some point it’s better out than still being polished. This book came from frustration that we can be doing so much better, and we’re not. I didn’t grow up thinking "I’m going to be a revolutionary", but I can’t not see what I see and not say something. We can be doing so much better than we are. And so I had to be willing to just get the word out, imperfect. It wasn’t (isn’t) clear that I’m the best person to call this out, but someone needs to!
That said, I have worked really hard to have the right pieces in place. I’ve collected and integrated what I think are the necessary frameworks, provided case studies and a workplace scenario, and some tools to work forward. I have done my best to provide a short and cogent kickstart to moving forward.
Just to let you know that I’m starting my push. I’ll be presenting on the book at ASTD’s ICE conference, and doing some webinars. Bryan Austin of GameOn Learning interviewed me on my thoughts in this direction. I do believe in the message, and that it at least needs to be heard. I think it’s really the necessary message for L&D (in it, you’ll find out why I’m suggesting we need to shift to P&D!). Forewarned! I look forward to your feedback.
Clark
A number of years ago I wrote a series on design heuristics that emerged from looking at our cognitive limitations and practices from other fields. One of the practices I covered briefly in one of the posts was egoless design, and a recent conversation reminded me of it.
The context for this is talking about how to improve our designs. One of the things from Watts Humphrey’s work on software design was that if we don’t scrutinize our own work, we’ll have blindspots that we’re unaware of. With regular peer review, he substantially improved code quality outcomes. Egoless programming was all about getting our ego out of the way while we worked.
This applies to instructional design as well. Too often we have to crank it out, and we don't test it to see if it's working. Instead, if it's finished, it is good. How do we know? It's very clear that there are a lot of beliefs and practices about design that are wrong. Otherwise, we wouldn't have this problem with elearning avoidance. There's too much bad elearning out there. What can we do?
One of the things we could, and should do, is design reviews. Just like code reviews, we should get other eyes looking at our work. We should share our work at things like DemoFest, we should measure ourselves against quality criteria, and we should get expert reviews. And, we should set performance metrics and measure against them!
Of course, that alone isn’t good enough. We have to redesign our processes once we’ve identified the flaws, to structure things so that it’s hard to do bad design, and doing good design flows naturally. And then iterate.
If you don’t think your work is good enough to share, you’re not doing good enough work. And that needs to change. Get started: get feedback and assistance in moving forward. Just hearing talks about good design isn’t a bad start, but it’s not enough. You’ve got to look at what you are doing, get specifically relevant feedback, and then get assistance in redesigning your design processes. Or you won’t know your own limitations. It’s time to get serious about your elearning; do it as if it matters. If not, why do it at all?
Clark
A colleague wondered if the image on the cover of the new book was a PDA, and my initial response was that the convergence of capabilities suggested the demise of the PDA. But then I had a rethink…
For what is a PDA? It’s a digital platform sans the capability of a cellular voice channel. My daughter got an iPod touch, but within a year we needed to get her a new phone, and it’s an iPhone. Which suggests that a device without phone capability is increasingly less feasible.
But wait a minute, there are plenty of digital devices sans voice. In fact, I have one. It’s a tablet! It may have cellular data, but it certainly doesn’t have voice. And while people are suggesting that the tablet is done, I’m not interested in a phablet, as I already have a problem with a phone in my pocket (putting me in the fashion faux pas category of liking a holster), and I think others want something smaller that they can have all the time.
So, I’ve argued elsewhere that mobile devices have to be handheld, and that tablets have usage patterns different than pocketables. But I think in many instances tablets do function as personal digital assistants, when you’re not constrained by space. There are advantages to the larger screen. So, while I think the pocketable version of the PDA is gone (since having a phone and a PDA seems redundant), the non-phone digital assistant is going to persist for the larger form factor. What am I missing?
Clark
It's a well-known phenomenon that new technologies get used in the same ways as old technologies until their new capabilities emerge. And this is understandable, if a little disappointing. The question is, can we do better? I'd certainly like to believe so! And a conversation on Twitter led me to try to make the case.
So, to start with, you have to understand the concept of affordances, at least at a simple level. The notion is that objects in the world support certain actions owing to the innate characteristics of the object (flat horizontal surfaces support placing things on them, levers afford pushing and pulling, etc.). Similarly, interface objects can imply their capabilities (buttons for clicking, sliders for sliding). They can be conveyed by visual similarity to familiar real-world objects, or be completely new (e.g. a cursor).
One of the important concepts is whether the affordance is ‘hidden’ or not. So, for instance, on iOS you can have meaningful differences between one, two, three, and even four-fingered swipes. Unless someone tells you about it, however, or you discover it randomly (unlikely), you’re not likely to know it. And there’re now so many that they’re hard to remember. There are many deep arguments about affordances, and they’re likely important but they can seem like ‘angels dancing on the head of a pin’ arguments, so I’ll leave it at this.
The point here being that technologies have affordances. So, for example, email allows you to transmit text communications asynchronously to a set group of recipients. And the question is, can we anticipate and leverage the properties and skip (or minimize) the stumbling beginnings.
Let me use an example. Remember the Virtual Worlds bubble? Around 2003, immersive learning environments were emerging (one of my former bosses went to work for a company). And around 2006-2009 they were quite the coming thing, and there was a lot of excitement that they were going to be the solution. Everyone would be using them to conduct business, and folks would work from desktops connecting to everyone else. Let me ask: where are they now?
The Gartner Hype Cycle talks about the ‘Peak of Inflated Expectations’ and then the ‘Trough of Disillusionment’, followed by the ‘Slope of Enlightenment’ until you reach the ‘Plateau of Productivity’ (such vibrant language!). And what I want to suggest is that the slope up is where we realize the real meaningful affordances that the technology provides.
So I tried to document the affordances and figure out what the core capabilities were. It seemed that Virtual Worlds really supported two main things: being inherently 3D and being social. Which are important components, no argument. On the other hand, they had two types of overhead: the cognitive load of learning them, and the technological load of supporting them. Which means that their natural niche would be where 3D would be inherently valuable (e.g. spatial models or settings, such as refineries where you want to track flows), and where social would also be critical (e.g. mentoring). Otherwise there were lower-cost ways to do either one alone.
Thus, my prediction would be that those would be the types of applications that’d be seen after the bubble burst and we’d traversed the trough. And, as far as I know, I got it right. Similarly, with mobile, I tried to find the core opportunities. And this led to the models in the Designing mLearning book.
Of course, there’s a catch. I note that my understanding of the capabilities of tablets has evolved, for instance. Heck, if I could accurately predict all the capabilities and uses of a technology, I would be running venture capital. That said, I think that I can, and more importantly, we can, make a good initial stab. Sure, we’ll miss some things (I’m not sure I could’ve predicted the boon that Twitter has become), but I think we can do better than we have. That’s my claim, and I’m sticking to it (until proved wrong, at least ;).
Clark
As preface, I used to teach interface design. My passion was still learning technology (and has been since I saw the connection as an undergraduate and designed my own major), but there’re strong links between the two fields in terms of design for humans. My PhD advisor was a guru of interface design and the thought was "any student of his should be able to teach interface design". And so it turned out. So interface design continues to be an interest of mine, and I recognize the importance. More so on mobile, where there are limitations on interface real estate, so more cleverness may be required.
Stephen Hoober, who I had the pleasure of sharing a stage with at an eLearning Guild conference, is a notable UI design expert with a speciality in mobile. He had previously conducted a research project examining how people actually hold their phones, as opposed to anecdotes. The Guild’s Research Director, Patti Schank, obviously thought this interesting enough to extend, because they’ve jointly published the results of the initial report and subsequent research into tablets as well. And the results are important.
The biggest result, for me, is that people tend to use phones while standing and walking, and tablets while sitting. While you can hold a tablet with two hands and type, it’s hard. The point is to design for supported use with a tablet, but for handheld use with a phone. Which actually does imply different design principles.
I note that I still believe tablets to be mobile, as they can be used naturally while standing and walking, as opposed to laptops. Though you can support them, you don't have to. (I'm not going to let the fact that there are special harnesses you can buy to hold tablets while you stand, for applications like medical facilities, dissuade me; my mind's made up, so don't confuse me :)
The report goes into more detail about just how people hold devices in their hands (one-handed with thumb, one hand holding, one hand touching, two hands with two thumbs, etc.), and the proportion of each. This affects where on the screen you put information and interaction elements.
Another point is the importance of the center for information and the periphery for interaction, yet users are more accurate at the center, so you need to make your periphery targets larger and easier to hit. Seemingly obvious, but somehow obviousness doesn’t seem to hold in too much of design!
There is a wealth of other recommendations scattered throughout the report, with specifics for phones, small and large tablets, etc., as well as major takeaways. For example, the fact that tablets are often supported implies that font size needs more consideration than you'd expect!
The report is freely available on the Guild site in the Research Library (under the Content>Research menu). Just in time for mLearnCon!
Clark
Towards Maturity is a UK-based but global initiative looking at organizations' use of technology for learning. While not as well known in the US, they've been conducting benchmarking research on what organizations are doing and trying to provide guidance as well. I even put their model as an appendix in the forthcoming book on reforming L&D. So I was intrigued to see the new report they have just released.
The report, a survey of 2,000 folks in a variety of positions in organizations, asks what they think about elearning. It covers a number of aspects of how people learn: when, where, how, and their opinion of elearning. It's also presented in an appealing, infographic-like style.
What intrigued me was the last section: are L&D teams tuned into the learner voice? The results are indicative. This section juxtaposes what the report heard from learners against what L&D reported in a previous study. Picking out just a few:
88% of staff like self-paced learning, but only 23% of L&D folks believe that learners have the necessary confidence
84% are willing to share with social media, but only 18% of L&D believe their staff know how
43% agree that mobile content is useful (or essential), but only 15% of L&D encourage mlearning
This is indicative of a big disconnect between L&D and the people they serve. This is why we need the revolution! There’s lots more interesting stuff in this report, so I strongly recommend you check it out.
Clark