Blogs
Don followed up Roger (and graciously adapted his presentation to fit into a considerably shortened time slot). He made a clear and engaging argument about how things are changing and why a new mindset is needed.
Clark | Blog | Nov 22, 2015 04:50am
Abhijit used an unusual presentation deck of just two sketchnotes to present his very interesting thoughts and examples of living in perpetual beta, concluding that if L&D changes, it could be a catalyst for change. A message very synergistic with the Revolution ;).
Clark | Blog | Nov 22, 2015 04:50am
Gary presented a passionate and compelling argument for the value of using the maker movement as a vehicle for education reform.
Clark | Blog | Nov 22, 2015 04:50am
I’ve talked in the past about the importance of engaging emotionally before beginning learning. And I’ve talked about the importance of understanding what makes a topic intrinsically interesting. But I haven’t really separated them out, as became clear to me in a client meeting. So let me remedy that here.
I’ve argued, and believe, that we should open up learners emotionally before we address them cognitively. Before we tell them what they’ll learn, before we show them objectives, we should create a visceral reaction, a wry recognition of "oh, yes, I do need to know this". It can be a dramatic or humorous exaggeration of the positive consequences of having the knowledge or the negative consequences of not. I call this a ‘motivating example’, distinct from the actual reference examples used to illustrate the model in context. In previous content we’ve used comics to point out the problems of not knowing, and similarly Michael Allen had a fabulous video that dramatized the same. Of course, you could also have a graphic novel introduction of someone saving the day with this knowledge. It of course depends on your audience and what will work for them.
Another story I tell is of when a colleague found out I did games and asked if I wanted to assist him and his team. The task was, to me and many, not necessarily a source of great intrinsic interest, but he pointed out that he’d discovered that to practitioners it was like playing detective. Which of course gave him a theme and an overarching hook. And this is the second element of engagement we can and should leverage.
Once we’ve hooked them into why this learning is important, we then want to help maintain interest through the learning experience. If we can find out what makes this particular element interesting, we should have it represented in the examples and practice tasks. This will help illuminate the rationale and develop learner abilities by integrating the inherent nature of the task into the learning experience.
SMEs are often challenging, particularly when it comes to getting real decisions out of them, but here’s where they’re extremely valuable. In addition to stories illustrating great wins and losses that can serve as examples (and the motivating example I mentioned above), they can help you understand why this is intrinsically interesting to them. They’ve spent the time to become experts in this; we want to unpack why it was worth such effort. You may have to drill a bit below "make the world a better place", but you should be able to get there.
By hooking them in initially by making them aware of the role of this knowledge, and then maintaining interest through the learning experience, you have a better chance of your learning sticking. And that’s what we want to achieve, right?
Clark | Blog | Nov 22, 2015 04:49am
LTC Ho Mee Yin told the story of rethinking the learning design for the Singapore Armed Forces. She talked about some new frameworks that helped move to a more enlightened, more activity-centric learning design, and about a performance support tool for instructors.
Clark | Blog | Nov 22, 2015 04:49am
Charles, in an engaging story, set the changes in work and the world as a basis for the 70:20:10 framework as a way to think about supporting learning going forward. He elaborated the elements and the value to be uncovered via examples.
Clark | Blog | Nov 22, 2015 04:49am
Yes, I just covered her plenary at Learning@Work, but there were some differences in emphasis. A nice overview, again, of what successful organizations are doing differently.
Clark | Blog | Nov 22, 2015 04:49am
In the process of looking at ways to improve the design of courses, the starting point is good objectives. And as a consequence, I’ve been enthused about the notion of competencies, as a way to put the focus on what people do, not what they know. So how do we do this, systematically, reliably, and repeatably?
Let’s be clear: there are times we need knowledge-level objectives. In medicine, or any other field where responses need to be quick and accurate, we need a very constrained vocabulary. So drilling in the exact meanings of words is valuable, as an example. Ideally, though, that’s coupled with using that language to set context or make decisions. So "we know it’s the right medial collateral ligament, prep for the surgery" could serve as a context, or a choice to operate on the left or right ventricle could serve as a decision point. As van Merriënboer’s 4-Component Instructional Design model points out, we need to separate the knowledge from the complex problems we apply it to. Still, I suggest that what’s likely to make a difference to individuals and organizations is the ability to make better decisions, not to recite rote knowledge.
So how do we get competencies when we want them? The problem, as I’ve talked about before, is that SMEs don’t have access to 70% of what they actually do, it’s compiled away. We then need good processes, so I’ve talked to a couple of educational institutions doing competencies, to see what could be learned. And it’s clear that while there’s no turnkey approach, what’s emerging is a process with some specific elements.
One thing is clear: if you’re trying to cover a whole college-level course, you’ve got to break it up. Break the top level down into a handful of competencies, then continue to take each of those apart, perhaps for another level, ‘til you have a reasonable scope. This is heuristic, of course, but with a focus on ‘do’ you have a good likelihood of getting there.
One of the things I’ve heard across various entities trying to get meaningful objectives is working with more than one SME. If you can get several, you have a better chance of triangulating on the right outcomes and objectives. They may well disagree about the knowledge, but if you manage the process right (emphasize ‘do’, lather, rinse, repeat), you should be able to get them to converge. It may take some education, and you may have to let them get the
Not just any SMEs will do. Two things are really valuable: on-the-ground experience to know what needs to be done (and what doesn’t), and the ability to identify and articulate the models that guide the performance. Some instructors, for instance, can teach to a text but aren’t truly masters of the content, nor experienced practitioners. Multiple SMEs help, but the better the SME, the better the outcome.
I believe you want to ensure that you’re getting both the right things and all the things. I’ve recommended to a client triangulating not just with SMEs but with practitioners (or, rather, the managers of the roles the learners will be engaged in), and any other reliable stakeholders. The point is to get input from the practice as well as the theory, identifying the models that support proper behavior and the misconceptions that underpin where people go wrong.
Once you have a clear idea of the things people need to be able to do, you can then identify the language for the competencies. I’m not a fan of Bloom’s (unwieldy, hard to apply reliably), but I am a fan of Mager-style definitions (action, context, metric).
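To make that concrete, here’s a minimal sketch (Python, purely illustrative; the field names and the example competency are mine, not from any standard tool) of treating a Mager-style definition as a structured record rather than free text:

    from dataclasses import dataclass

    @dataclass
    class Competency:
        """A Mager-style objective: observable action, the context it's
        performed in, and the metric for acceptable performance."""
        action: str   # what the learner will do (a 'do', not a 'know')
        context: str  # the conditions under which they'll do it
        metric: str   # the standard that counts as success

        def statement(self) -> str:
            # Render the record as a conventional objective sentence.
            return f"Given {self.context}, the learner will {self.action}, {self.metric}."

    # Hypothetical example for illustration only:
    triage = Competency(
        action="prioritize incoming support tickets",
        context="a queue of mixed-severity customer issues",
        metric="matching the published severity rubric at least 9 times out of 10",
    )
    print(triage.statement())

Keeping action, context, and metric as separate fields also makes it easy to check that every competency actually names a ‘do’.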
After this is done, you can identify the knowledge needed, and perhaps create objectives for that, but to me the focus is on the ‘do’, the competencies. This is very much aligned with an activity-based learning model, whereby you immediately design the activities that align with the competencies before you decide on the content.
So, this is what I’m inferring. There are good tools and templates you could design to go with this, identifying competencies and misconceptions, and at the same time gathering stories and motivations. (An exercise left for the reader. ;) The overall goal of getting meaningful objectives, however, is key to good learning design. Any nuances I’m missing?
Clark | Blog | Nov 22, 2015 04:48am
It’s too soon, so it’s hard to write this. My friend and colleague, Jay Cross, passed away suddenly and unexpectedly. He’s had a big impact on the field of elearning, and his insight and enthusiasm were a great contribution.
I had the pleasure of meeting him at a lunch arranged by a colleague to introduce learning tech colleagues in the SF East Bay area. Several of us discovered we shared an interest in meta-learning, or learning to learn, and we decided to campaign together on it, forming the Meta-Learning Lab. While the endeavor didn’t have much impact, through it Jay and I discovered a shared enjoyment of good food and drink, travel, and learning. We hobnobbed in the usual places, and he got me invited to some exotic locales including Abu Dhabi, Berlin, and India.
Jay was great to travel with; he’d read up on wherever it was and would then be a veritable tour guide. It amazed me how he could remember all that information and point out things as we walked. He had a phenomenal memory; he read more than anyone I know, and synthesized the information to create an impressive intellect.
After Princeton he’d gone on for an MBA at Harvard, and his subsequent endeavors included creating the first (online?) MBA for the University of Phoenix. He was great to listen to when he was doing business, and served as a role model; I often tapped into my ‘inner Jay’ when dealing with clients. He always found ways to add more value to whatever was being discussed.
He was influential. While others may have quibbled about whether he coined the term ‘elearning’, he definitely had strong opinions about what should be happening, and was typically right. His book Informal Learning had a major impact on the field.
He was also a raconteur, with great stories and a love of humor. He had little tolerance for stupidity, and could eviscerate silly arguments with a clear insight and incisive wit. As such, he could be a bit of a rogue. He ruffled some feathers here and there, and some could be put off by his energy and enthusiasm, but his intentions were always in the right place.
Overall, he was a really good person. He happily shared with others his enthusiasm and energy. He mentored many, including me, and was always working to make things better for individuals, organizations, the field, and society as a whole. He had a great heart to match his great intellect, and was happiest in the midst of exuberant exploration.
He will be missed. Rest in peace.
Some other recollections of Jay:
Harold Jarche
Jane Hart
Charles Jennings
Kevin Wheeler
Inge de Waard
Alan Levine
Curt Bonk
David Kelly
Brent Schlenker
Dave Ferguson
George Siemens
Mark Oehlert
Gina Minks
John Sener
Sahana Chattopadhyay
Christy Tucker
Adam Salkeld
Learning Solutions from the eLearning Guild
CLO Magazine
A twitter collection (courtesy of Jane Hart)
Bio from his graduating class.
Clark | Blog | Nov 22, 2015 04:48am
Too often I see instructional design training and tools that, in addition to talking about ‘objectives’ and ‘assessment’ (which I tend to call ‘practice’, for hopefully obvious reasons), simply talk about ‘content’. And I think that simplification is a path to bad learning design. It fails to emphasize the nuances, and that’s a bad thing.
The elements of content should be an introduction to the learning experience, a presentation of the concept(s), examples that illustrate applying the concept in context, and a closing of the experience. Each of these has component parts that, when addressed, contribute to the likelihood of a good learning outcome. Ignoring them, however, is likely to lead to a lack of impact.
The problem is that our cognitive architecture is prone to mistakes in execution. We’re bad at remembering bits and pieces, and we naturally can skip steps. That’s why we create external tools like checklists and templates to support good design. So if we’re not scaffolding here, we run the risk of creating content that may be well-written, but isn’t well-designed.
And we see this all too often: eLearning that’s content-heavy and learning-light. It may have good production values, with a consistent look and feel, elegant prose, and great images, but it also tends to have too much rote information, too little conceptual grounding, sparse and unilluminating examples, and no real emotional ‘hook’.
Instead, we could be using checklists or templates to ensure we get the right elements. We could have support for designing introductions, concepts, examples, and closings (and better support for good practice too ;). It doesn’t have to be built into an authoring tool, but it certainly should be manifest in the development tools for interim representations.
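As a sketch of what such scaffolding might look like (my own illustration, not any particular authoring tool’s feature), a design checklist could be as simple as a structured template that flags missing elements before a storyboard moves forward:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LessonDesign:
        """A hypothetical per-lesson design template covering the elements above."""
        introduction: str = ""                              # emotional hook: why this matters
        concepts: List[str] = field(default_factory=list)   # the model(s) being taught
        examples: List[str] = field(default_factory=list)   # the concept applied in context
        practice: List[str] = field(default_factory=list)   # learner decisions and tasks
        closing: str = ""                                   # wrap-up and pointers onward

        def missing_elements(self) -> List[str]:
            # Report which elements a reviewer still needs to address.
            gaps = []
            if not self.introduction: gaps.append("introduction")
            if not self.concepts:     gaps.append("concepts")
            if not self.examples:     gaps.append("examples")
            if not self.practice:     gaps.append("practice")
            if not self.closing:      gaps.append("closing")
            return gaps

    draft = LessonDesign(introduction="Comic: the cost of winging the estimate")
    print(draft.missing_elements())  # ['concepts', 'examples', 'practice', 'closing']

The point isn’t the particular representation; it’s that the elements get checked rather than left to memory.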
There are other reasons to be a bit more granular, such as flexible content that supports repurposing for delivery in the moment, and adaptive learning, but overall the real reason is for good design. It doesn’t have to be granular, but it does have to explicitly consider the elements that contribute to learning and get those right. Right?
Clark | Blog | Nov 22, 2015 04:48am
In a recent conversation, we were talking about the Kirkpatrick model, and a colleague had an interesting perspective that hadn’t really struck me overtly. Kirkpatrick is widely (not widely enough, and wrongly) used as an evaluation tool, but he talked about using it as a design tool, and that perspective made clear for me a problem with our approaches.
So, there’s a lot of debate about the Kirkpatrick model, whether it helps or hinders the movement towards good learning. I think it’s misrepresented (including by its own progenitors, though they’re working on that ;), and while I’m open to new tools I think it does a nice job of framing a fairly simple but important idea. The goal is to start with the end in mind.
And the evidence is that it’s not being used well. The most widely implemented level of the model is Level 1, which isn’t of use (the correlation between learner reaction and actual impact is .09, essentially zero within rounding error). Level 2 drops to about a third of organizations, and usage falls from there. And this is broken.
The point, and this is emphasized by the ‘design’ perspective, is that you are supposed to start with Level 4 and work back. What’s the measurable indicator in the organization that isn’t up to snuff, and what behavior (Level 3) would likely impact that? And how do we change that behavior (Level 2)? And here’s where it can go beyond training: that intervention might be a job aid, or access to a network (which hasn’t featured much in the promotion of the model).
To be fair, the proponents do argue you should be starting at Level 4, but with the numbering (which Don admits he might have got wrong) and the emphasis on evaluation, it doesn’t hit you up front. Using it as a design tool, however, would emphasize the point.
So here’s to thinking of learning design as working backwards from a problem, not forwards from a request. And, of course, to better learning design overall.
Clark | Blog | Nov 22, 2015 04:48am
There’s recently been a spate of attacks on 70:20:10 and moving beyond courses, and I have to admit I just don’t get it. So I thought it’s time to set out why I think these approaches make sense.
Let’s start with what we know about how we learn. Learning is action and reflection. Instruction (education, training) is designed action and guided reflection. That’s why, by the way, an information dump and a knowledge test isn’t a learning solution. People need to actively apply the information.
And it can’t follow an ‘event’ model, as learning is spaced out over time. Our brains can only accommodate so much (read: very little) learning at any one time. There needs to be ongoing facilitation after a formal learning experience - coaching over time and stretch assignments - to help cement and accelerate the learning experience.
Now, this can be something L&D does formally, but at some point formal has to let go (not least for pragmatic reasons), and it becomes the responsibility of the individual and the community. It shifts from formal coaching to informal mentoring, personal exploration, and feedback from colleagues and fellow practitioners. It’s impractical for L&D to take on this full responsibility; instead, its role becomes one of facilitating mentoring, communication, and collaboration.
That’s where the 70:20:10 framework comes in. Leaving that mentoring and collaboration to chance is a mistake, because it’s demonstrably the case that people don’t necessarily have good self-learning skills. And if we foster self-learning skills, we can accelerate the learning outcomes for the organization. Addressing the skills and culture for learning, personally and collectively, is a valuable contribution that L&D should seize. And it’s not about controlling it all, but making an environment that’s conducive, and facilitating the component skills.
Further, some people seem to get their knickers in a twist about the numbers, and I’m not sure why that is. People seem comfortable with the Pareto Principle, for instance (aka the 80/20 rule), and it’s the same. In both cases it’s not the exact numbers that matter, but the concept. For the Pareto Rule it’s recognizing that some large fraction of outcomes comes from a small fraction of inputs. For the 70:20:10 framework, it’s recognizing that much of what you apply as your expertise comes from things other than courses. And tired old cliches about "wouldn’t want a doctor who didn’t have training" don’t reflect that you’d also not want a doctor who didn’t continue learning through internships and practice. It’s not denying the 10, it’s augmenting it.
And this is really what Modern Workplace Learning is about: looking beyond the course. The course is one important, but ultimately small, piece of being a practitioner, and organizations can no longer afford to ignore the rest of the learning picture. Of course, there’s also the whole innovation side, and performance support for when learning doesn’t have to happen, which L&D should also facilitate (cue the L&D Revolution), but getting the learning right by looking at the bigger picture of how we really learn is critical.
I welcome debate on this, but pragmatically, if you think about how you learned what you do, you should recognize that much of it came from other than courses. Beyond Education, the other two E’s have been characterized as Exposure and Experience: doing the task in the company of others (learning socially), and learning from the outcomes of actually applying the knowledge in context and making mistakes. That’s real learning, and the recognition that it should not be left to chance is how these frameworks raise awareness and provide an opportunity for L&D to become more relevant to the organization. And that, I strongly believe, is a valuable outcome. So, what do you think?
Clark | Blog | Nov 22, 2015 04:47am
A recent post by my colleague in the Internet Time Alliance, Jane Hart, has created quite the stir. In it, she talks about two worlds: an old world and a new world of workplace learning. And another colleague from the Serious eLearning Manifesto, Will Thalheimer, wrote a rather ‘spirited’ response. I know, respect, and like both these folks, so I’m wrestling with trying to reconcile these seemingly opposite viewpoints. I tried to point out why I think the new perspective makes sense, but I want to go deeper.
Jane was talking about how a split is emerging between old-school L&D and new directions. This is essentially the premise of the Revolution, so I’m sympathetic. She characterized each, admittedly in somewhat stark contrast, representing the past with a straw-man portrait of an industrial-era approach, and similarly sketching a new, modern approach that’s much more flexible and focused on outcomes, not on the learning event. I’ve experienced much of the former, and recognize the value of the latter. It’s of course not quite as cut-and-dried, but Jane was making the case for change and using a stark contrast as a motivator.
Will responded to Jane with some pretty strong language. He acknowledged her points in a section on areas of agreement, but then, after accusing her of painting with too broad a brush, he commits the same offense in his section on oversimplifications. Here he points out extreme views that he implies are the views being painted, but they’re overstated as "always" and "never".
Look, Will fights for the right things when he talks about how formal learning could be better. And Jane does too, when she looks to a more enlightened approach. So let’s state some more reasonable claims that I hope both can agree with. Here I’m using Will’s ‘oversimplifications’ and infusing them with the viewpoints I believe in:
Learners increasingly need to take responsibility for their learning, and we should facilitate and develop it instead of leaving it to chance
Learning can frequently be trimmed (and more frequently needs to change the content/practice ratio), and we should substitute performance support for learning when possible
Much of training and elearning is boring, and we can and should do better at making it meaningful
People can be a great source of content, but they sometimes need facilitation
Using some sort of enterprise social platform can be a powerful source of learning, with facilitation and the right culture, but isn’t necessarily a substitute when formal learning is required
On-the-job learning isn’t necessarily easy to leverage, but it should be a focus for better outcomes in many cases
Crowds of people have more wisdom than single individuals, when you facilitate the process appropriately
Traditional learning professionals have an opportunity to contribute to an information age approach, with an awareness of the bigger picture
I do like that Will, at the end, argues that we need to be less divisive, and I agree. I think Jane was trying to point in new directions, and I think the evidence is clear that L&D needs to change. Healthy debate helps; we need to have opinions, even strong ones, hopefully without rancor or aspersions. I don’t know quite why Jane’s post triggered such a backlash, but I hope we can come together to advance the field.
Clark | Blog | Nov 22, 2015 04:47am
In the course of some work with a social business agency, I was wondering how to represent the notion of facilitating continual innovation. This representation emerged from my cogitations, and while it’s not quite right, I thought I’d share it as part of Work Out Loud week.
The core is the 5 R’s: Researching the opportunities; processing your explorations by either Representing them or putting them into practice (Reifying them), and Reflecting on those; and then Releasing them. And of course it’s recursive: this is a release of my representation of some ideas I’ve been researching, right? This is very much based on Harold Jarche’s Seek-Sense-Share model for Personal Knowledge Mastery (PKM). I’m trying to be concrete about the different types of activities you might do in the Sense phase, as I think representations such as diagrams are valuable but very different from active application via prototyping and testing. (And yes, I’m really stretching to keep the alliteration of the R’s. I may have to abandon that. ;)
What was interesting to me was to think of the ways in which we can facilitate these activities. We shouldn’t assume good research skills; we should assist individuals in understanding what qualifies as a good search for input and how to evaluate the hits, as well as in establishing and filtering existing information streams.
We can and should also facilitate the representation of interpretations, whether by informing the properties of good diagrams, prose, or other representational forms. We can help make the processes of representation clear as well. Similarly, we can develop understanding of useful experimentation approaches, and how to evaluate the results.
Finally, we can communicate the outcomes of our reflections, and collaborate on all these activities, whether research, representation, reification (that R is a real stretch), or reflection. As I’m doing here, soliciting feedback.
I do believe there’s a role for L&D to look at these activities as well, and ‘training’ isn’t the solution. Here the role is very much facilitation. It’s a different skill set, yet a fundamental contribution to the success of the organization. If you believe, like I do, that the increasing rate of change means innovation is the only sustainable differentiator for success, then this role is crucial and it’s one I think L&D has the opportunity to take on. Ok, those are my thoughts, what are yours?
Clark | Blog | Nov 22, 2015 04:47am
A few days of in-person collaboration and interaction with my colleagues (plus the VIP treatment) was exactly what I needed—an escape from answering e-mails, scheduling articles, and hounding people to meet deadlines.
Training Magazine Articles | Blog | Nov 16, 2015 09:14am
A significant discrepancy exists between the educational credentials employees are pursuing, and the credentials managers want employees to have, according to a joint research study conducted by the University of Phoenix and EdAssist.
Training Magazine Articles | Blog | Nov 16, 2015 09:12am
According to the national "Millennial Mindset Study" of 1,200 employed Millennials (ages 18 to 33) conducted by online training platform Mindflash, the "lack of company support for training and development" is the No. 1 most surprising aspect of work in the "real world."
Training Magazine Articles | Blog | Nov 16, 2015 09:11am
Some 85 percent of employees report they are losing sleep due to work-related stress, according to a survey of 714 U.S. workers by global talent mobility consulting firm Lee Hecht Harrison.
Training Magazine Articles | Blog | Nov 16, 2015 09:09am
Find out about the latest advances in training technology.
Training Magazine Articles | Blog | Nov 16, 2015 09:07am
How do you know if you were a productive leader today? And how do you plan better so you get the important things done tomorrow? Here are three rules for higher performance.
Training Magazine Articles | Blog | Nov 16, 2015 09:06am
What do you do when you have a superstar who is going to leave? If at all possible, make it a long goodbye to facilitate a good succession plan, comprehensive knowledge transfer, and a smooth transition.
Training Magazine Articles | Blog | Nov 16, 2015 09:04am
The latest training industry mergers, acquisitions, partnerships, and more.
Training Magazine Articles | Blog | Nov 16, 2015 09:03am
The latest products and services launching in the training industry.
Training Magazine Articles | Blog | Nov 16, 2015 09:02am
High Impact Conversations are specific, deliberate conversations meant to foster commitment to action, greater workplace alignment, and shared accountability.
Training Magazine Articles | Blog | Nov 16, 2015 09:01am