So, I was continuing the campaign for the Revolution, and wanted to expand the audience interaction. I could’ve used the tired ‘turn to your neighbor’ technique, but I had a thought (dangerous, that): could it be improved upon?

As I may have mentioned, there has been a backlash against ‘brainstorming’. For example, the New York Times ran an article about how it doesn’t work: if you bring people into a room, give them a problem or topic, and get them to discuss, it won’t work. And they’re right! But that’s a broken model of brainstorming; it’s a straw man. A real model of brainstorming has individuals thinking about the problem on their own before you bring them together. When people don’t get a chance to think independently, the first person to speak colors the thoughts of the others; but if people can come up with their own ideas first, then share and improve them, it works well. The room is smarter than the smartest person in the room, as the quote has it, but the caveat is that you have to manage the process right.

So how does this relate to ‘turn to your neighbor’? It occurred to me that a clear implication was that if you thought to yourself first, before sharing, you’d get a better outcome. And so that’s what I did: I had them think for themselves on the question I presented, then share, and then stop. Now, to be fair, I didn’t have time to ask for all the output; instead I asked who had come up with ‘formal’ for a question on what supports optimal execution, and who came up with facilitating the flow of information as a solution for supporting innovation. So we have practical limits on what we can do with a large audience and a small amount of time. However, I did ask at the end of the first one whether they thought it worthwhile.
And I asked again of a subset of the audience who attended the next-day workshop ("Clark Quinn’s workshop on Strategic Elearning is awesome" was one comment, <fist pump>) what they thought. Overall, the feedback was that it was an improvement. Certainly the outputs should be better. One word offered was "energized". The overall take of both the large audience and the smaller one was very positive. It doesn’t take much longer, because the quick thinking bit is easy to do (and it’s no easier to get them to stop sharing :), but it’s a lesson and an improved technique all in one! So now you know: if you see anyone doing just the ‘turn to your neighbor’, they’re not up on the latest research. Wonder if we can get this to spread? Continued exploration is a necessary element of improvement, and innovations happen through diligent work and refinement. Please do try it out and let me know how it goes! And, of course, even just your thoughts.
Clark . Blog . Nov 22, 2015 05:06am
David McCandless gave a graphically and conceptually insightful talk on the power of visualization at Callidus Cloud Connections. He demonstrated how visualization drives insight by tapping into our pattern-matching cognitive architecture.
Clark . Blog . Nov 22, 2015 05:05am
Is there an appetite for change in L&D? That’s the conversation I’ve had with colleagues lately. And I have to say that the answer is mixed, at best. The consensus is that most of L&D is comfortably numb: L&D folks are barely coping with getting courses out on a rapid schedule and running training events, because that’s what’s expected and known. There really isn’t any burning desire for change, or willingness to move even if there is. This is a problem.

As one colleague commented: "When I work with others (managers etc.) they realise they don’t actually need L&D any more." And that’s increasingly true: with tools for narrated slides, screencasts, and videos in the hands of everyone, there’s little need for the same old ordinary courses from L&D. People can create or access portals to share created and curated resources, and social networks to interact with one another. L&D will become just a part of HR, addressing the requirements (onboarding and compliance); everything else will be self-serve.

The sad part of this is the promise of what L&D could be doing. If L&D started facilitating learning, not controlling it, things could go better. If L&D realized it was about supporting the broad spectrum of learning (self-learning, social learning, research, problem-solving, trouble-shooting, design, and all the other situations where you don’t know the answer when you start), the possibilities are huge. L&D could be responsible for optimizing execution of the things they know people need to do, but with a broader perspective that includes putting knowledge into the world when possible. And L&D could also be optimizing the ability of the organization to continually innovate. It is this possibility that keeps me going. There’s a brilliant world where the people who understand learning combine with the people who know technology, and work together to enable organizations to flourish.
That’s the world I want to live in, and as Alan Kay famously said: "the best way to predict the future is to invent it." Can we, please?
Clark . Blog . Nov 22, 2015 05:05am
One of the themes I’ve been strumming in presentations is complementing what we do well with tools that do well the things we don’t. A colleague reminded me that JCR Licklider wrote of this decades ago (and I’ve similarly followed the premise through the writings of Vannevar Bush, Doug Engelbart, and Don Norman, among others). We’re already seeing this. Chess has changed from people playing people, through people playing computers and computers playing computers, to computer-human pairs playing other computer-human pairs. The best competitors aren’t the best chess players or the best programs, but the best pairs, that is, the player and computer that best know how to work together.

The implication is to stop trying to put everything in the head, and start designing systems that complement us in ways that ensure the combination is the optimal solution to the problem being confronted. Working backwards, we should decide what portion should be handled by the computer and what by the person (or team), then design the resources, and then train the humans to use those resources in context to achieve the goals.

Of course, this only covers known problems, the ‘optimal execution’ phase of organizational learning. We similarly want the right complements to support the ‘continual innovation’ phase as well. That means providing tools for people to communicate, collaborate, create representations, access and analyze data, and more. We need to support ways for people to draw upon and contribute to their communities of practice from their work teams. We need to facilitate the formation of work teams, and make sure that this process of interaction is provided with just the right amount of friction. Just like a tire, interaction requires friction: too little and you go skidding out of control; too much, and you impede progress. People need to interact constructively to get the best outcomes.
Much is known about productive interaction, though little enough seems to make its way into practice. Our design approaches need to cover the complete ecosystem, everything from courses and resources to tools and playgrounds. And it starts by looking at distributed cognition, recognizing that thinking isn’t done just in the head, but in the world, across people and tools. Let’s get out and start playing instead of staying in old trenches.
Clark . Blog . Nov 22, 2015 05:05am
At a recent meeting, one of my colleagues mentioned that increasingly people weren’t throwing away prototypes. Which prompted reflection, since I have been a staunch advocate for revolutionary prototyping (and here I’m not talking about "the" Revolution ;). When I used to teach user-centered design, the tools for creating interfaces were complex. The mantras were test early, test often, and I advocated Double Double P’s (Postpone Programming, Prefer Paper; an idea I first grabbed from Rob Phillips, then at Curtin). The reason was that if you started building too early in the design phase, you’d have too much invested to throw things away if they weren’t working.

These days, with agile programming, we see sprints producing working code, which then gets elaborated in subsequent sprints. And the tools make it fairly easy to work at a high level, so it doesn’t take much effort to produce something. So maybe we can make things that we can throw out if they’re wrong.

OK, confession time: I have to say that I don’t quite see how this maps to elearning. We have sprints, but how do you have a workable learning experience and then elaborate it? On the other hand, I know Michael Allen’s doing it with SAM and Megan Torrance just had an article on it, but I’m not clear whether they’re talking storyboard, then coded prototype, or… Now that I think about it, I think it’d be good to document the core practice mechanic, and perhaps the core animation, and maybe the spread of examples. I’m big on interim representations, and perhaps we’re talking about the same thing. And if not, well, please educate me!

I guess the point is that I’m still keen on being willing to change course if we’ve somehow gotten it wrong. Small representations are good, and increasing fidelity is fine, and so I suppose it’s okay if we don’t throw out prototypes often, as long as we do when we need to. Am I making sense, or what am I missing?
Clark . Blog . Nov 22, 2015 05:05am
A colleague was describing his journey, and attributed much of his success (rightly) to his core skills, including his creativity. I was resonating with his list until I got to ‘attention to detail’, and it got me to thinking.

Attention to detail is good, right? We want people to sweat the nuances, and I certainly am inspired by folks who do that. But there are times when I don’t want to be responsible for the details. To be sure, these are times when it doesn’t make sense to have me do the details. For example, once I’ve helped a client work out a strategy, the implementation really should largely be on them, and I might take some spot reviews (far better than just helping them start and then abandoning them).

So I wondered what the alternative would be. Now, the obvious thought is lack of attention to detail, which might initially seem negative, but could there be a positive connotation? What came to me was attention to connections. That is, seeing how what’s being considered might map to a particular conceptual model, or a related field. Seeing how it’s contextualized, and bringing together solutions. Seeing the forest, not the trees.

I’m inclined to think that there are benefits to those who see connections, just as there is a need for those who can plug away at the details. And it’s probably contextual; some folks will be one in one area and the other in another. For example, there are times I’m too detail-oriented (e.g. fighting for conceptual clarity), and times when I’m missing connections (particularly in reading the politics of a situation). And vice versa: times when I’m not detail-oriented enough, yet very good at seeing connections. They’re probably not ends of a spectrum, either, as I’ve moved away from that view in practical matters (hmm, wonder what that implies about the Big 5?).
Take introvert and extrovert: from a learning perspective it’s about how well you learn on your own versus how well you learn with others, and you could be good or bad at each, or both. Similarly here, you could be able to do both (as with my colleague: he’s one of the smartest folks I know, demonstrably innovative and connecting, as well as able to sweat the details whether writing code or composing music). Or maybe this is all a post-hoc justification for wanting to play out at the conceptual frontier, but I’m not going to apologize for that. It seems to work…
Clark . Blog . Nov 22, 2015 05:04am
I was thinking about how to make meaningful practice, and I had a thought tied to some previous work that I may not have shared here. So allow me to do that now.

Ideally, our practice has us performing in ways that are like the ways we perform in the real world. While it is possible to make alternatives available that represent different decisions, sometimes there are nuances that require us to respond in richer ways. I’m talking about things like writing up an RFP, or a response letter, or creating a presentation, or responding to a live query. And while these are desirable things, they’re hard to evaluate.

The problem is that our technology for evaluating freeform text is limited, let alone anything more complex. While there are tools like latent semantic analysis that can be developed to read text, they’re complex to develop, and they won’t work on spoken responses, let alone spreadsheets or slide decks (common forms of business communication). Ideally, people would evaluate them, but that’s not a very scalable solution if you’re talking about mentors, and even peer review can be challenging for asynchronous learning.

An alternative is to have the learner evaluate themselves. We did this in a course on speaking, where learners ultimately dialed into an answering machine, listened to a question, and then spoke their responses. What they then could do was listen to a model response as well as their own. Further, we could provide a guide, an evaluation rubric, to help the learner evaluate their response with respect to the model response (e.g. "did you remember to include a statement and examples?"). This would work with more complex items, too: "Here’s a model spreadsheet (or slide deck, or document); how does it compare to yours?" This is very similar to the types of social processing you’d get in a group, where you see how someone else responded to the assignment, and then evaluate.
This isn’t something you’d likely do straight off; you’d probably scaffold the learning with simple tasks first. For instance, in the example I’m talking about, we first had them recognize well- and poorly-structured responses, then create them from components, and finally create them in text before having them call into the answering machine. Even then, they first responded to questions they knew they were going to get, before tasks where they didn’t know the questions. But this approach serves as enriching practice on the way to live performance.

There is another benefit besides allowing the learner to practice in richer ways and still get feedback. In the process of using an evaluation rubric to assess both their own response and the model response, the learner internalizes the criteria and the process of evaluation, becoming a self-evaluator and consequently a self-improving learner. As they go forward, that rubric can continue to guide them as they move out into a performance situation.

There are times where this may be problematic, but increasingly we can and should mix media and use technology to help close the gap between learning practice and the performance context. We can prompt, record learner answers, and then play back theirs and the model response with an evaluation guide. Or we can give them a document template and criteria, take their response, and ask them to evaluate theirs and another, again with a rubric. This is richer practice, and it helps shift the learning burden to the learner, helping them become self-learners. I reckon it’s a good thing.

I’ll suggest that you consider this as another tool in your repertoire of ways to create meaningful practice. What do you think?
Clark . Blog . Nov 22, 2015 05:04am
The following was prompted by a discussion on how education has the potential to be disrupted. And I don’t disagree, but I don’t see the disruptive forces marshaling that I think it will take. Some thoughts I lobbed in another forum (lightly edited):

Mark Warschauer, in his great book Learning in the Cloud (which has nothing to do with ‘the cloud’), pointed out that there are only three things wrong with public education: the curricula, the pedagogy, and the way they use tech; other than that, they’re fine. Ahem. And much of what I’ve read about disruption seems flawed in substantial ways.

I’ve seen the for-profit institutions, and they’re flawed because even if they did understand learning (and they don’t seem to), they’re handicapped: they have to dance to the ridiculous requirements of accrediting bodies. Those bodies don’t understand why SMEs aren’t a good source of objectives, so the learning goals are not useful to the workplace. It’s not the profit requirement per se, because you could do good learning, but you have to start with good objectives, and then understand the nuances that make learning effective. WGU is at least being somewhat disruptive on the objectives.

MOOCs don’t yet have a clear business model; right now they’re subsidized by either public institutions or business experiments. And the pedagogy doesn’t really scale well: their objectives also tend to be knowledge-based, and to have a meaningful outcome they’d need to be application-based, and you can’t really evaluate that at scale (unless you get *really* nuanced about peer review, but even then you need some scrutiny that just doesn’t scale). For example, just because you learn to do AI programming doesn’t mean you’re ready to be an AI programmer. That’s the xMOOCs; the cMOOCs have their own problems with expectations around self-learning skills. Lovely dream, but it’s not the world I live in, at least yet.
As for things like the Khan Academy: it’s a nice learning adjunct, and they’re moving to a more complete learning experience, but they’re still largely tied to the existing curricula (e.g. doing what Jonassen railed against: the problems we give kids in schools bear no relation to the problems they’ll face in the real world).

The totally missed opportunity across all of this is the possibility of layering 21C skills across the curriculum in a systematic and developable way. If we could get a better curriculum, focused on developing applicable skills and meta-skills, with a powerful pedagogy, in a pragmatically deliverable way… There’s lots of room for disruption, but it’s really a bigger effort than I’ve yet seen anyone willing to take. And yet, if you did it right, you’d have an essentially unassailable barrier to entry: real learning done at scale. However, I’m inclined to think it’s more plausible in the countries that increasingly ‘get’ that higher ed is an investment in the future of a country, are making it free, and could make it a ‘man on the moon’ program.

I’m willing, even eager, to be wrong on this, so please let me know what you think!
Clark . Blog . Nov 22, 2015 05:04am
I end up seeing a lot of different elearning. And, I have to say, despite my frequent disparagement, it’s usually well-written; the problem seems to be in the starting objectives. But compared to learning that really has an impact (medical, flight, or military training, for instance), it seems woefully under-practiced. I’d roughly (and generously) estimate that the typical ratio is around 80:20 for content to practice. And, in the context of moving from ‘getting it right’ to ‘not getting it wrong’, that seems woefully inadequate. So, two questions: do we just need more practice, or do we also have too much content? I’ll put my money on the latter, that is: both.

To start, in most of the elearning I see (even stuff I’ve had a role in, for reasons out of my control), the practice isn’t enough. Of course, it’s largely wrong, being focused on reciting knowledge as opposed to making decisions, but there just isn’t enough. That’s OK if you know they’ll be applying it right away, but that usually isn’t the case. We really don’t scaffold the learner from their initial capability, through more and more complex scenarios, until they’re at the level of ability we want: performing the decisions they need to be making in the workplace with enough flexibility and confidence, and with sufficient retention until it’s actually needed. Of course, it shouldn’t be the event model, and that practice should be spaced over time. Yes, designing practice is harder than just delivering content, but it’s not that much harder to develop more than just to develop some.

However, I’ll argue we’re also delivering too much content. I’ve suggested in the past that I can rewrite most content to be 40-60% shorter than it starts (including my own; it takes me two passes). Learners appreciate it. We want a concise model, and some streamlined examples, but then we should get them practicing. And then let the practice drive them to the content.
You don’t have to prepackage it as much, either; you can give them some source materials that they’ll be motivated to use, and even some guidance (read: job aids) on how to perform. And, yes, this is a tradeoff: how do we find a balance that yields the outcomes we need but doesn’t blow out the budget? It’s an issue, but I suggest that, once you get in the habit, it’s not that much more costly. And it’s much more justifiable when you get to the point of actually measuring your impact. Which many orgs aren’t doing yet. And, of course, we should.

The point is that I think our ratio should really be 50:50, if not 20:80, for content to practice. That’s if it matters; but if it doesn’t, why are you bothering? And if it does, shouldn’t it be done right? What ratios do you see? And what ratios do you think make sense?
Clark . Blog . Nov 22, 2015 05:04am
The past two weeks, I’ve been on the road (hence the paucity of posts). They’ve been great opportunities to engage around interesting topics, but they’ve also provided some learning opportunities (ahem). The title of this post, by the way, came from m’lady, who was quoting what a senior Girl Scout said was the biggest lesson she learned from her leader: "to embrace Plan B" ;).

So two weeks ago I was visiting a client working on upping their learning game. This is a challenge in a production environment, but as I discussed many times in posts over the second half of 2014 and some this year, I think there are some serious actions that can be taken. What is needed are better ways to work with SMEs, better constraints around what makes useful content, and perhaps most importantly what makes meaningful interaction and practice. I firmly believe that there are practical ways to get serious elearning going without radical change, though some initial hiccups will be experienced.

This past week I spoke twice. First, on a broad spectrum of learning directions, to a group that was doing distance learning and wanted to take a step back, review what they’d been doing, and look for improvement opportunities. I covered deeper learning, social learning, meta-learning, and more. Then I went beyond and talked about 70:20:10, measurement, games and simulations, mlearning, the performance ecosystem, and more. I then moved on to a separate (and delightful) event in Vancouver to promote the Revolution. It was the transition between the two events that threw me.

So, Plan A was to fly back home on Tuesday, and then fly on to Vancouver on Wednesday morning. But, well, life happened. Both flights, there and back, for the first engagement were delayed (thanks, American), each first leg enough to make me miss the connection. On the way out I just got in later than I expected (leading to 4.5 hours of sleep before the long and detailed presentation).
But on the way back, I missed the last connecting flight home. And this had several consequences. Instead of spending Tuesday night in my own bed and repacking for the next day, I spent the night in the Dallas/Fort Worth airport. Since they blamed it on weather (tho’ if the incoming flight had been on time, it might’ve gotten out in time to avoid the storm), they didn’t have any obligation to provide accommodation, but there were cots and blankets available. I tried to pull into a dark and quiet place, but most of the good ones were taken already. I found a boarding gate that was out of the way, but it was bright and loud. I gave up after an hour or so and headed off to another area, where I found a lounge, pulled together a couple of armchairs, and managed to doze for 2.5 hours or so before getting up and going on the hunt for some breakfast. Lesson: if something’s not working, change!

I caught a flight back home in just enough time to catch the next one up to Vancouver. The problem was, I wasn’t able to swap out my clothes, so I was desperately in need of some laundry. Upon arriving, I threw one of the shirts, socks, etc. into a sink, gave them a wash, and hung them up. (I also took a shower, which was not only a necessity after a rough night but a great way to gather myself and feel a bit more human.) The next morning, as I went to put on the shirt, I found a stain! I couldn’t get up in front of all those people with a stained shirt. Plan B was out the door. And the other shirt had acquired one too! Plan C was on the dust heap. Now what? Fortunately, my presentation was in the afternoon, but I needed to do something. I went downstairs and found a souvenir shop in the hotel, but the shirts were all a wee bit too loud; I didn’t really want to pander to the crowd quite so egregiously. I asked at the hotel desk if there was a place I could buy a shirt within walking distance, and indeed there was. I was well and truly on Plan D by this time.
So I hiked on out to a store and fortunately found another shirt I could throw on.  Lesson: keep changing! I actually made the story part of my presentation.  I made the point that just like in my case, organizations need not only optimal execution of the plans, but then also the ability to innovate if the plan isn’t working.  And L&D can (and should) play a role in this.  So, help your people be prepared to create and embrace Plan B (and C and…however many adaptations they need to have). And one other lesson for me: be better prepared for tight connections to go awry!
Clark . Blog . Nov 22, 2015 05:03am