As I’ve been working with the Foundation over the past six months, I’ve had occasion to review a wide variety of elearning, specifically in the vocational and education space, but my experience mirrors that from the corporate space: most of it isn’t very good. I realize that’s a harsh pronouncement, but I fear that it’s all too true; most of the elearning I see will have very little impact. And I’m becoming ever more convinced that what I’ve quipped in the past is true:
Quality design is hard to distinguish from well-produced but under-designed content.
And here’s the thing: I’m beginning to think that this is not just a problem with the vendors, tools, etc., but that it’s more fundamental. Let me elaborate.
There’s a continual problem of bad elearning, and yet I hear people lauding certain examples, awards are granted, tools are touted, and processes promoted. Yet what I see really isn’t that good. Sure, there are exceptions, but that’s the problem, they’re exceptions! And while I (and others, including the instigators of the Serious eLearning Manifesto) try to raise the bar, it seems to be an uphill fight.
Good learning design is rigorous. There’s significant effort just in getting the right objectives: finding the right SME, working with them and not taking what they say verbatim, and so on. Then comes working to establish the right model and communicating it, making meaningful practice, and using media correctly, all while successfully fending off the forces of fable (learning styles, generations, etc.).
So, when it comes to the standard tradeoff - fast, cheap, or good, pick two - we’re ignoring ‘good’. And I think a fundamental problem is that everyone ‘knows’ what learning is, and they’re not being astute consumers. If it looks good, presents content, has some interaction, and some assessment, it’s learning, right? NOT! But stakeholders don’t know, we don’t worry enough about quality in our metrics (quantity per time is not a quality metric), and we don’t invest enough in learning.
I’m reminded of a thesis that says medicos consciously reengineered their status in society. They went from being thought of as ‘quacks’ and ‘sawbones’ to an almost reverential status today by making the path to becoming a doctor quite rigorous. I’m tempted to suggest that we need to do the same thing.
Good learning design is complex. People don’t have predictable properties the way concrete does. Understanding the necessary distinctions to do the right things is complex. Executing the processes to successfully design, refine, and deliver a learning experience that leads to an outcome is a complicated engineering endeavor. Maybe we do have to treat it like rocket science.
Creating learning should be considered a highly valuable outcome: you are helping people achieve their goals. But if you really aren’t, you’re perpetrating malpractice! I’m getting stroppy, I realize, but it’s only because I care and I’m concerned. We have got to raise our game, and I’m seriously concerned with the perception of our work, our own knowledge, and our associated processes.
If you agree, (and if you don’t, please do let me know in the comments), here’s my very serious question because I’m running out of ideas: how do we get awareness of the nuances of good learning design out there?
Clark
Yesterday I went off about how learning design should be done right and it’s not easy. In a conversation two days ago, I was talking to a group that was supporting several initiatives in adaptive learning, and I wondered if this was a good idea.
Adaptive learning is desirable. If learners come from different initial abilities, learn at different rates, and have different availability, the learning should adapt. It should skip things you already know, work at your pace, and provide extra practice if the learning experience is extended. (And, BTW, I’m not talking learning styles.) And this is worthwhile, if the content you are starting with is good. But even then, is it really necessary? To explain, here’s an analogy:
I have heard it said that the innovations in the latest drugs are, in many cases, unnecessary, and the extra costs (and profits for the drug companies) wouldn’t be needed. The claim is that the new drugs aren’t any more effective than the existing treatments, if the existing ones are used properly. The point is that people don’t take drugs as prescribed (being irregular, missing doses, not continuing past the point they feel better, etc.), and if they did, the new drugs wouldn’t offer any advantage. (As a side note, it would appear that focusing on improving patient drug-taking protocols, such as with a mobile app, would be a sound strategy.) This isn’t true in all cases, but even in some it makes a point.
The analogy here is that using all the fancy capabilities: tarted up templates for simple questions, 3D virtual worlds, even adaptive learning, might not be needed if we did better learning design! Now, that’s not to say we couldn’t add value with using the right technology at the right points, but as I’ve quipped in the past: if you get the design right, there are lots of ways to implement it. And, as a corollary, if you don’t get the design right, it doesn’t matter how you implement it.
We do need to work on improving our learning design, first, rather than worrying about the latest shiny objects. Don’t get me wrong, I love the shiny objects, but that’s with the assumption that we’re getting the basics right. That was my assumption ’til I hit the real world and found out what’s happening. So let’s please get the basics right, and then worry about leveraging the technology on top of a strong foundation.
Clark
I recently opined that good learning design was complex, really perhaps close to rocket science. And I suggested that a consequent problem was that the nuances are subtle. It occurs to me that perhaps discussing some example problems will help make this point more clear.
Without being exhaustive, there are several consistent problems I see in the elearning content I review:
The wrong focus. Seriously, the outcomes for the class aren’t meaningful! They are about information or knowledge, not skill. Which leads to no meaningful change in behavior, and more importantly, in outcomes. I don’t want to learn about X, I want to learn how to do X!
Lack of motivating introductions. People are expected to give a hoot about this information, but no one helps them understand why it’s important. Learners should be assisted to viscerally ‘get’ why this is important, and helped to see how it connects to the rest of the world. Instead we get some boring drone about how this is really important. Connect it to the world and let me see the context!
Information focused or arbitrary content presentations. To get the type of flexible problem-solving organizations need, people need mental models about why and how to do it this way, not just the rote steps. Yet too often I see arbitrary lists of information accompanied by a rote knowledge test. As if that’s gonna stick.
A lack of examples, or trivial ones. Examples need to show a context, the barriers, and how the content model provides guidance about how to succeed (and when it won’t). Instead we get fluffy stories that neither connect to the model nor show its application in the context. Which means it’s not going to support transfer (and if you don’t know what I’m talking about, you’re not ready to be doing design)!
Meaningless and insufficient practice. Instead of asking learners to make decisions like they will be making in the workplace (and this is my hint for the first thing to focus on fixing), we ask rote knowledge questions. Which isn’t going to make a bit of difference.
Nonsensical alternatives to the right answer. I regularly ask of audiences "how many of you have ever taken a quiz where the alternatives to the right answer are so silly or dumb that you didn’t need to know anything to pass?" And everyone raises their hand. What possible benefit does that have? It insults the learner’s intelligence, it wastes their time, and it has no impact on learning.
Undistinguished feedback. Even if you do have an alternative that’s aligned with a misconception, it seems like there’s an industry-wide conspiracy to ensure that there’s only one response for all the wrong answers. If you’ve designed alternatives that reflect meaningfully different ways of going wrong, you should be addressing each of them individually.
The list goes on. Further, any one of these can severely impact the learning outcomes, and I typically see all of these!
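To make those last two points concrete, here’s a trivial sketch (my own illustrative structure, not any tool’s actual format) of what it looks like when each distractor maps to a specific misconception and carries its own feedback, rather than one generic "Incorrect, try again":

```python
# A deliberately trivial sketch (invented for illustration) of a practice
# item where each distractor reflects a specific misconception and gets
# feedback addressing that particular way of going wrong.

question = {
    "stem": "A customer's device won't power on. What do you check first?",
    "options": [
        {"text": "That the power cable is properly seated",
         "correct": True,
         "feedback": "Right: rule out the simplest cause before escalating."},
        {"text": "Whether the motherboard needs replacing",
         "correct": False,
         "misconception": "jumping to the most drastic fix",
         "feedback": "Costly and premature; eliminate simple causes first."},
        {"text": "Whether the OS needs reinstalling",
         "correct": False,
         "misconception": "confusing software faults with power faults",
         "feedback": "A machine that won't power on isn't a software problem."},
    ],
}

def respond(item, choice):
    """Return feedback specific to the chosen option, right or wrong."""
    return item["options"][choice]["feedback"]

print(respond(question, 2))  # addresses that learner's particular error
```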
These are really just the flip side of the elements of good design I’ve touted in previous posts (such as this series). I mean, when I look at most elearning content, it’s like the authors have no idea how we really learn, how our brains work. Would you design a tire for a car without knowing how one works? Would you design a cover for a computer without knowing what it looks like? Yet it appears that’s what we’re doing in most elearning. And it’s time to put a stop to it. As a first step, have a look at the Serious eLearning Manifesto, specifically the 22 design principles.
Let me be clear, this is just the surface. Again, learning engineering is complex stuff. We’ve hardly touched on engagement, spacing, and more. This may seem like a lot, but this is really the boiled-down version! If it’s too much, you’re in the wrong job.
Clark
So yesterday, I went off on some of the subtleties in elearning that are being missed. This is tied to last week’s posts about how we’re not treating elearning seriously enough. And part of it is in the knowledge and skills of the designers, but it’s also in the process. Or, to put it another way, we should be using steps and tools that align with the type of learning we need. And I don’t mean ADDIE, though ADDIE isn’t inherently the problem.
So what do I mean? For one, I’m a fan of Michael Allen’s Successive Approximation Model (SAM), which iterates several times (tho’ heuristically, and it could be better tied to a criterion). Given that people are far less predictable than, say, concrete, fields like interface design have long known that testing and refinement need to be included. ADDIE isn’t inherently linear, certainly as it has evolved, but in many ways it makes it easy to treat design as a one-pass process.
Another issue, to me, is to structure the format of your intermediate representations so that they make it hard to do aught but come up with useful information. So, for instance, in recent work I’ve emphasized that a preliminary output is a competency doc that includes (among other things) the objectives (and measures), models, and common misconceptions. This has evolved from a similar document I use in (learning) game design.
You then need to capture your initial learning flow. This is what Dick & Carey call your instructional strategy, but to me it’s the overall experience of the learner, including addressing the anxieties learners may feel, raising their interest and motivation, and systematically building their confidence. The anxieties or emotional barriers to learning may well be worth capturing at the same time as the competencies, it occurs to me (learning out loud ;).
It also helps if your tools don’t interfere with your goals. It should be easy to create animations that help illustrate models (for the concept) and tell stories (for examples). These can be any media tools, of course. The most important tools are the ones you use to create meaningful practice. These should allow you to create mini-, linear-, and branching-scenarios (at least). They should have alternative feedback for every wrong answer. And they should support contextualizing the practice activity. Note that this does not mean tarted up drill and kill with gratuitous ‘themes’ (race cars, game shows). It means having learners make meaningful decisions and act on them in ways like they’d act in the real world (click on buttons for tech, choose dialog alternatives for interpersonal interactions, drag tools to a workbench or adjust controls for lab stuff, etc).
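For what it’s worth, here’s a minimal sketch of the structure I mean by a branching scenario: contextualized decisions, per-choice feedback, and consequences that play out the way they would in the real world. (The scene content and data shape are purely illustrative, not any tool’s format.)

```python
# A minimal branching-scenario shape: each node is a decision in context,
# each choice carries its own feedback and leads to a consequence node.
# Everything here is invented for illustration.

scenes = {
    "start": {
        "context": "Your client is upset about a missed deadline.",
        "choices": [
            ("Own the problem and propose a recovery plan", "recovering",
             "Owning it keeps the relationship intact."),
            ("Explain that the vendor was at fault", "escalating",
             "Deflecting erodes trust; the client doesn't care whose fault it is."),
        ],
    },
    "recovering": {"context": "The client agrees to the new plan.", "choices": []},
    "escalating": {"context": "The client asks for your manager.", "choices": []},
}

def play(scenes, node="start"):
    """Walk the scenario; the next node's context *is* the consequence."""
    while scenes[node]["choices"]:
        print(scenes[node]["context"])
        for i, (text, _, _) in enumerate(scenes[node]["choices"]):
            print(f"  {i}: {text}")
        text, node, feedback = scenes[node]["choices"][int(input("Choice: "))]
        print(feedback)
    print(scenes[node]["context"])

# play(scenes)  # uncomment to step through it
```

The point isn’t the code; it’s that every choice carries its own consequence and its own feedback.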
Putting in place processes that only use formal learning when it makes sense, and then doing it right when it does make sense, is key to putting L&D on a path to relevancy. Cranking out courses on demand, focusing on measures like cost/butt/seat, adding rote knowledge quizzes to SME knowledge dumps, etc are instead continuing down the garden path to oblivion. Are you ready to get scientific and strategic about your learning design?
Clark
I’ve been on a rant about learning design for a few posts, but I ended up talking about how creating a better process is part of getting strategic. The point was that our learning design has to embody what’s known about how we learn, i.e., a learning engineering. And it occurs to me that getting our processes structured to align with how we work is part of a bigger picture of how our strategies have to similarly be informed.
So, as part of the L&D Revolution I argue we need to have, I’m suggesting organizations, and consequently L&D, need to be aligned with how we think, work, and learn. So our formal learning initiatives (used only when they are really needed) need to be based upon learning science. And performance support similarly needs to reflect how we process information, and, importantly, things we don’t do well and need support for. The argument for informal and social learning similarly comes from our natural approaches, and similarly needs to provide facilitation for where things can and do go wrong.
And, recursively, L&D’s processes need to similarly reflect what we do, and don’t, do well. So, just as we should provide support for performers to execute, communicate, collaborate, and continue to improve (why L&D needs to become P&D), we need to make sure that we practice what we preach. And a scientific method means we need to measure what we’re doing, not just efficiency, but effectiveness.
It’s time that L&D gets out of the amateur approach, and starts getting professional. Which means understanding the organization’s goals, rejecting requests that are nonsensical, examining what we do, using technology in sophisticated ways (*cough* content engineering *cough*), and more. We need to know about how we think, work, and learn, and apply it to what we do. We’re about people, after all, so it’s about time we understood the science in our field, and quit thinking that our existing practices (largely from an industrial age) are inherently relevant. We must be scrutable, and that means we must scrutinize. Time to get to work.
#itashare
Clark
In a discussion last week, I suggested that the things I was excited about included wearables. Sure enough, someone asked if I’d written anything about it, and I haven’t, much. So here are some initial thoughts.
I admit I was not a Google Glass ‘Explorer’ (and now the program has ended). While tempted to experiment, I tend not to spend money until I see how the device is really going to make me more productive. For instance, when the iPad was first announced, I didn’t want one. Between the time it was announced and the time it was available, however, I figured out how I’d use it to produce, not just consume. I got one the first day it came out. By the same rationale, I got a Palm Pilot pretty early on, and it made me much more effective. I haven’t gotten a wrist health band, on the other hand, though I don’t think they’re bad ideas; they’re just not what I need.
The point being that I want to see a clear value proposition before I spend my hard-earned money. So what am I thinking in regard to wearables? What wearables do I mean? I am talking wrist devices, specifically. (I may eventually warm up to glasses as well, when what they can do is more augmented reality than it is now.) Why wrist devices? That’s what I’m wrestling with: trying to conceptualize what is, so far, a more intuitive assessment.
Part of it, at least, is that it’s with me all the time, but in an unobtrusive way. It supports a quick flick of the wrist instead of pulling out a whole phone. So it can do that ‘smallest info’ in an easy way. And, more importantly, I think it can bring things to my attention more subtly than can a phone. I don’t need a loud ringing!
I admit that I’m keen on a more mixed-initiative relationship than I currently have with my technology. I use my smartphone to get things I need, and it can alert me to things that I’ve indicated I’m interested in, such as events that I want an audio alert for. And of course, for incoming calls. But what about things that my systems come up with on their own? This is increasingly possible, and again desirable. Using context, and if a system had some understanding of my goals, it might be able to be proactive. So imagine you’re out and about, and your watch reminds you that while you were here you wanted to pick up something nearby, and provides the item and location. Or prompts you to prep for that upcoming meeting and provides some minimal but useful info. Note that this is not what’s currently on offer, largely. We already have geofencing to do some of this, but right now for it to happen you have to pull out your phone, or have it make an intrusive noise to be heard from your pocket or purse.
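To be concrete about the kind of proactive rule I’m imagining, here’s a sketch. The logic and names are entirely hypothetical; this isn’t any shipping API.

```python
import math

# Hypothetical sketch of a proactive, context-triggered nudge: if I'm near
# a place tied to a goal I've recorded, tap my wrist instead of waiting
# for me to pull out the phone and ask. (All names/numbers invented.)

errands = [{"task": "pick up the dry cleaning",
            "lat": 37.7793, "lon": -122.4193}]

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance in meters; fine at street scale."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6_371_000 * math.hypot(dx, dy)

def wrist_nudges(lat, lon, radius_m=200):
    """Goals worth a subtle tap, given where I happen to be right now."""
    return [e["task"] for e in errands
            if distance_m(lat, lon, e["lat"], e["lon"]) <= radius_m]

print(wrist_nudges(37.7790, -122.4200))  # -> ['pick up the dry cleaning']
```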
So two things about this: one, why the watch and not the phone; and the other, why not the glasses? The watch form factor is, to me, a more accessible interface to serve as an interactive companion. As I suggested, pulling the phone out of the pocket, turning it on, going through the security check (even just my fingerprint), adds more overhead than I necessarily want. If I can have something less intrusive, even as part of a system and not fully capable on its own, that’s OK. Why not glasses? I guess it’s just that they seem more unnatural. I am accustomed to having information on my wrist, and while I wear glasses, I want them to be invisible to me. I would love to have a heads-up display at times, but all the time would seem to get annoying. I’ll stretch and suggest that the empirical result that most folks have stopped wearing them most of the time bears out my story.
Why not a ring, or a pendant, or…? A ring seems to have too small an interface area. A pendant isn’t easily observable. On my wrist is easy for a glance (hence, watches). Why not a whole forearm console? If I need that much interface, I can always pull out my phone. Or jump to my tablet. Maybe I will eventually want an iBracer, but I’m not yet convinced. A forearm holster for my iPhone? Hmmm…maybe too geeky.
So, reflecting on all this, it appears I’m thinking about tradeoffs of utility versus intrusion. A wrist device seems to fit a sweet spot in an ecosystem of tech: the quick glance, then the pocket access, and then various tradeoffs of size and weight for real productivity between tablets and laptops.
Of course, the real issue is whether there’s sufficient information available through the watch that it makes a value proposition. Is there enough that’s easy to get to that doesn’t require a phone? Check the temperature? Take a (voice) note? Get a reminder, take a call, check your location? My instinct is that there is. There are times I’d be happy to not have to take my phone (to the store, to a party) if I could take calls on my wrist, do minimal note taking and checking, and navigate. From the business perspective, there’s also performance support, whether push or pull. I don’t see it for courses, but for just-in-time… And contextual.
This is all just thinking aloud at this point. I’m contemplating the iWatch but don’t have enough information as of yet. And I may not feel the benefits outweigh the costs. We’ll see.
Clark
My colleague Charles Jennings recently posted on the value of autonomous learning (worth reading!), sparked by a diagram provided by another ITA colleague, Jane Hart (that I also thought was insightful). In Charles’ post he also included an IBM diagram that triggered some associations.
So, in IBM’s diagram, they talked about the access phase, where learning is separate; the integration phase, where learning is ‘enabled’ by work; and the on-demand phase, where learning is ‘embedded’. They talked about ‘point solutions’ (read: courses) for access, then blended models for integration, and dynamic models for on-demand. The point was that the closer learning is to the work, the more value.
However, I was reminded of Fitts & Posner’s model of skill acquisition, which has three phases: cognitive, associative, and autonomous. The first, cognitive, is when you benefit from formal instruction: giving you models and practice opportunities to map actions to an explicit framework. (Note that this assumes a good formal learning design, not rote information and a knowledge test!) Then there’s an associative stage where that explicit framework is supported in being contextualized and compiled away. Finally, the learner continues to improve through continual practice.
I was initially reminded of Norman & Rumelhart’s accretion, restructuring, and tuning learning mechanisms, but it’s not quite right. Still, you could think of accreting the cognitive and explicitly semantic knowledge, then restructuring that into coarse skills that don’t require as much conscious effort, until it becomes a matter of tuning a finely automated skill.
This, to me, maps more closely to 70:20:10, because you can see the formal (10) playing a role to kick off the semantic part of the learning, then coaching and mentoring (the 20) support the integration or association of the skills, and then the 70 (practice, reflection, and personal knowledge mastery including informal social learning) takes over, and I mapped it against a hypothetical improvement curve.
Of course, it’s not quite this clean. While the formal often does kick off the learning, the role of coaching/mentoring and the personal learning are typically intermingled (though the role shifts from mentee to mentor ;). And, of course, the ratios in 70:20:10 are only a framework for rethinking investment, not a prescription about how you apply the numbers. And I may well have the curve wrong (this is too flat for the normal power law of learning), but I wanted to emphasize that the 10 only has a small role to play in moving performance from zero to some minimal level, that mentoring and coaching really help improve performance, and that ongoing development requires a supportive environment.
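As for the shape of that curve, what I have in mind is something like the power law of practice: rapid early gains, then ever-diminishing returns. A toy sketch, with entirely made-up numbers and phase boundaries:

```python
# Toy improvement curve following the power law of practice: steep early
# gains (where the formal '10' and coached '20' sit), flattening with
# extended practice (the '70'). All numbers and cutoffs are invented.

A, B = 100.0, 0.4   # asymptotic performance and learning-rate exponent

def performance(n):
    """Performance after n practice events: P = A * (1 - (n + 1) ** -B)."""
    return A * (1 - (n + 1) ** -B)

for n in [1, 5, 20, 100, 500]:
    phase = ("formal (10)" if n <= 5 else
             "coaching (20)" if n <= 20 else "practice (70)")
    print(f"after {n:3d} events: {performance(n):5.1f}  [{phase}]")
```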
I think it’s important to understand how we learn, so we can align our uses of technology to support them in productive ways. As this suggests, if you care about organizational performance, you are going to want to support more than the course, as well as doing the course right. (Hence the revolution. :)
#itashare
Clark
My background is in learning technology design, leveraging a deep background (read: Ph.D.) in cognition, and long experience with technology. I have worked as a learning game designer/developer, researcher and academic, project leader on advanced applications, program manager, and more. More recently, I’ve been working with many different types of organizations including not-for-profits, Fortune 500, small-medium enterprises, government, education, and more with workshops, project deliverables, strategic consulting, writing, and more.
This crosses formal learning, mobile learning, serious games, performance support, content systems, social and informal learning, and more. I reckon there’s a benefit to 30+ years of being fortunate enough to be at the cutting edge, and I work hard to maintain currency with developments in learning, technology, and organizational needs. I like to think I’m pretty good at it, and I am for hire. I’ve worked in most of the obvious ways: fixed-fee deliverables when we can define a scope, hourly/daily rates when it’s uncertain, and on a retainer basis to keep my expertise ‘on tap’.
What I have not done is work on a commission basis. That is, I don’t push someone’s solution on you for a cut of the action. I cut a few such deals in the early days, particularly for long-term clients/partners, but to no avail. And I’m fine with that. In fact, that’s now my stance.
There are reasons for this, both principled and pragmatic. On principle, I want to remain able to say Solution X is the best, when I truly believe that to be true, and not be swayed because Solution Y would offer me some financial reward. I believe my independence is in my clients’ best interests. This holds true for systems, vendors, individuals, whatever. I want you to be able to trust what I say, and know that it’s coming from my expertise, not some other influence. When you get my expert opinion, it is to your needs alone. And, pragmatically, I’m not a salesperson; it’s not in my nature.
I also don’t design solutions and outsource development. I have trusted partners I can work with, so I don’t need solicitations to show me your skills. I’m sure your team is awesome too, but I don’t want to take the time to vet your abilities, and I certainly wouldn’t represent them without scrutiny. When I have needs, I’ll reach out.
So I welcome hearing from you when you want some guidance on reviewing your processes, assessing or designing your strategy, ramping up your capabilities, considering markets, looking for collateral, and more. This is as true for vendors as other organizations. But don’t expect me to learn about your solutions (particularly for free), and flog them to others. Fair enough? Am I missing something?
Clark
(in the future)
Dr. Melik: You mean there was no deep fat? No steak or cream pies? Or hot fudge?
Dr. Agon: Those were thought to be unhealthy, precisely the opposite of what we now know to be true.
In Woody Allen’s Sleeper, about someone who wakes up in the future, one of the jokes is that all the things we thought were true are turned on their head. I was talking with my colleague Jay Cross about why we’re not seeing more uptake of the opportunities for L&D to move out of the industrial age, and one of the possible explanations is satisfaction with the status quo. And I was reminded of several articles I’ve read that support the value of rethinking.
In Sweden, for principled reasons, they decided that the model of prosecuting the prostitute wasn’t fair. She was, they argued, a victim. Instead, they decided to punish the solicitation of the service, a complete turnaround from the previous approach. It has reduced sex trafficking, for one outcome. Other countries are now looking at their model, and some have already adopted it.
In Portugal, which was experiencing problems with drugs, they took the radical step of decriminalizing them, and setting users up with treatment instead. While it’s not a panacea, it has not led to the massive increase in usage that was expected. Which is a powerful first step. It may be a small step toward undoing some of the misconceptions about addiction which may be emerging.
And in Denmark there was an experiment in doing away with road signs. The premise was that, with regulations in place, folks will trust the regulations to do the work. If you remove them, people have to go back to assessing the situation, and they’ll drive more safely. It appears, indeed, to be the case.
I could go on: the food pyramid, cubicles… more and more ideas are being shown to be misguided if not out-and-out wrong. And the reason I raise this is to suggest that complacency about anything, accepting the received wisdom, may not be helpful. Patti Shank recently wrote about the burden of having an informed opinion, arguing that we need to take ownership of our beliefs, and I think that’s right.
There are lots of approaches to get out of the box: appreciative inquiry, positive deviance, double loop learning, the list goes on. Heck, there’s even the silly and overused but apt cliche about the definition of insanity. The point being that regular reflection is part of being a learning organization. You need to be looking at what you’re doing, what others are doing, and what others are saying. Continual improvement is part of the ongoing innovation that today’s organization needs to thrive.
Yes, we can’t query everything, but if we have an area of responsibility, e.g. being in charge of learning strategy, we owe it to ourselves to know what the alternative approaches might be. And we certainly should be looking at what we’re doing and what impact it’s having. Measuring just efficiency instead of impact? Being an order taker and not investigating the real cause? Not looking at the bigger picture? Ahem. I am positing, via the Revolution, that L&D isn’t doing near what it could and should, and, via the Manifesto, that what it is doing, it is doing badly. So, what’s the response? I’ve done the research to suggest that there’s a need for a rethink, and I’m trying to foster it. So where do we go from here? Where do you go from here? Steak, anyone?
#itashare
Clark
I’ve been interested in process, so I attended this month’s Bay Area Learning Design Meetup that showcased LinkedIn’s work on Agile using Scrum for learning design. It was very nice of them to share the specifics of their process, and while there were more details than time permitted to cover, it was a great beginning to understand the differences.
Basically, a backlog is kept of potential new projects. They’re prioritized and a subset is chosen as the basis of the sprint and put on the board. Then for two weeks they work on hitting the elements on the board, with a daily standup meeting to present where they’re at and synchronize. At the end they demo to the stakeholders and reflect. As part of the reflection, they’re supposed to change something for the next iteration.
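To make the planning mechanic concrete, here’s a trivial sketch of how I understood it: pull prioritized backlog items onto the board until the team’s empirical velocity is spent. The items and numbers are invented for illustration (more on their actual velocity below).

```python
# Trivial sketch of sprint planning: fill the board greedily from a
# prioritized backlog until capacity (the team's empirically measured
# velocity, in story points) is spent. Items/numbers are made up.

backlog = [  # (item, story points), already in priority order
    ("compliance course refresh", 40),
    ("sales job aid", 15),
    ("onboarding branching scenario", 55),
    ("manager coaching video", 30),
]

def plan_sprint(backlog, velocity=100):
    """What doesn't fit stays in the backlog for a later sprint."""
    board, remaining = [], velocity
    for item, points in backlog:
        if points <= remaining:
            board.append(item)
            remaining -= points
    return board

print(plan_sprint(backlog))  # a mix of elearning, job aids, whatever fits
```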
There are different roles: a product owner, who’s the ‘client’ in a sense (and has a relation to whoever may be the end client); a Scrum master, who’s responsible for facilitating the group through the steps; and then the team, which should be small but at least represent all the necessary roles to execute whatever is being accomplished.
When I asked about scope, they said that they’ve found they can do about 100 story points (which are empirical) in a sprint, and they may distribute that across some elearning, some job aids, whatever. They didn’t seem too eager to try to quantify that relative to other known metrics, and I understand it’s hard, particularly in the time they had. Here’s the Mindmap:
Allen Interactions also discussed their SAM process (which I know and like), but the mind map didn’t match too well to their usual diagram (only briefly shown at the end), and I ran out of time trying to remedy it. It’s better just to look at the diagram ;).
Clark
There’s a considerable gap between what we can be doing and what we are doing. When you look at what’s out there, we see that there are several ways in which we fall short of the mark. While there are many dimensions that could be considered, for the sake of simplicity let’s characterize the two important ones as the effectiveness of our learning and the engagement of the experience. And I want to characterize where we are, where we could be, and the gaps we need to bridge.
If we map the space, we see that the lower left is the space of low engagement and low effectiveness. Too much elearning resides there. Now, to be fair, it’s easy to add engaging media and production values, so the space of typical elearning does span from low to high engagement. Moving up the diagram, however, towards increasing effectiveness, is an area that’s less populated. The red line separates the undesirable areas from the space we’d like to start hitting, where we begin to have some modicum of both effectiveness and engagement, moving towards the upper right. This space is relatively sparsely populated, I’m afraid. And while there are instances of content that do increase the effectiveness, there’s little that really hits the ultimate goal, the holy grail, where a fully integrated effective and engaging experience is achieved.
How do we move in the right direction? I’ve talked before about trying to hit the sweet spot of maximal effectiveness within pragmatic constraints. Certainly from an effectiveness standpoint, you should be looking at the components of the Serious eLearning Manifesto. To get effective learning, you need a number of elements, for instance:
meaningful practice: practice aligned with the real world task
contextualized practice: learning across contexts that support transfer
sustained practice: sufficient and increasingly challenging practice to develop the skills to the necessary level
spaced practice: practice spread out over time (brains need sleep to learn more than a certain threshold); a toy schedule is sketched just after this list
real world consequences: providing feedback, coupled with scaffolded reflection
model-based guidance: the best guide for practice is a conceptual basis (not rote information)
appropriate examples: that show the concepts being applied in context
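To pick on just one of these, here’s a toy sketch of what spaced practice means operationally, with a purely illustrative expanding-interval schedule (the gaps are not a prescription):

```python
import datetime

# Toy expanding-interval schedule for spaced practice: each review is
# pushed further out than the last, spreading practice over time (and
# across sleep). The gap sequence is illustrative only.

GAPS_DAYS = [1, 3, 7, 21, 60]  # made-up expanding intervals

def review_dates(first_practice: datetime.date):
    """Dates to re-practice a skill first exercised on `first_practice`."""
    dates, day = [], first_practice
    for gap in GAPS_DAYS:
        day += datetime.timedelta(days=gap)
        dates.append(day)
    return dates

for d in review_dates(datetime.date(2015, 11, 22)):
    print(d.isoformat())
```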
Some of these elements also contribute to engagement; others are specific to it. Components include:
learning-centered contexts: problems learners recognize as important
learner-centered contexts: problems learners want to solve
emotionally engaging introductions: hooking learners in viscerally as well as cognitively
adapted challenge: ramping up the challenge appropriately to avoid both boredom and frustration
unpredictability: maintaining the learner’s attention through surprise
meaningfulness: learners playing roles they want to be in
drama and/or humor
The integration of these elements was the underlying premise behind Engaging Learning, my book on integrating effectiveness and engagement, specifically on making meaningful practice, e.g. serious games. Serious games are one way to achieve this end, by contextualizing practice as decisions in a meaningful environment and using a game engine to adapt the challenge and providing essentially unlimited practice.
Other approaches achieve much of this effectiveness in different ways. Branching scenarios are powerful approximations to this by showing consequences in context but with limited replay, and so are constructivist and problem-based learning pedagogies. This may sound daunting, but with practice, and some shortcuts, this is doable.
For example, Socratic Arts has a powerful online pedagogy that leverages media and a constructivist pedagogy in a relatively simple framework. The learner is given ‘assignments’ that mirror real world tasks, via emails or videos of characters playing roles such as a boss. The outputs required similarly mimic work products you might find in this area. Scaffolding is available in a couple of ways: there are guidelines for the tasks, and videos of experts and documents are available as resources, to support the learner in getting the best outcome. While it’s low on fancy visual design, it’s effective because it’s closely aligned to the needed skills post-learning. And the cognitive challenge is pitched at the right level to engage the intellect, if not the aesthetics. This is a cost-effective balance.
The work I did with the Wadhwani Foundation hit a slightly different spot in trying to get to the grail. I didn’t have the ability to work quite as tightly with the SMEs from the get-go, and we didn’t have the ability to simulate the hands-on tasks as well as we’d like, but we did our best to infer real tasks and used low-tech simulations and scenarios to make it effective. We did use more media, animations and contextualized videos, to make the experience more engaging and effective as well.
The point being that we can start making learning more effective and engaging in practical ways. We need to make it effective, or why bother? We should make it engaging, to optimize the outcomes and not insult our learners. And we can. So why don’t we?
Clark
Last week I wrote about Rethinking, how we might want and need to revise our approaches, and showed a few examples of folks thinking out of the box and upending our cherished viewpoints. I discovered another one (much closer to ‘home’) and tweeted it out, only to get a pointer to another. I think it’s worth looking at these two examples that help make the point that maybe it’s time for a rethink of some of our cherished beliefs and practices.
The first was a pointer from a conversation I had with the proprietor of an organization with a new mobile-based coaching engine. Among the things touted was that much of our thinking about feedback appears to be wrong. I was given a reference and found an article that indeed upends our beliefs about the benefits of feedback.
The article investigates performance reviews, and finds them lacking, citing one study that found:
"a meta-analysis of 607 studies of performance evaluations and concluded that at least 30% of the performance reviews ended up in decreased employee performance."
Decreased performance in 30% of cases? And that’s not including the others that are just neutral. That’s a pretty bad outcome! Worse, the Society for Human Resource Management is cited as stating that "90% of performance appraisals are painful and don’t work". In short, one of the most common performance instruments is flawed.
As a consequence of tweeting this out, a respondent pointed to another article that he was reminded of. This one upends the notion that we’re good at rating others’ behavior: "research has demonstrated that each of us is a disturbingly unreliable rater of other people’s performance". That is, 360 degree reviews, manager reviews, etc., are fundamentally based upon review by others, and they’re demonstrably bad at it. The responses given have reliable biases that make the data invalid.
As a consequence, again, we cannot continue as we are:
"we must first stop, take stock, and admit to ourselves that the systems we currently use to reveal our people only obscure them"
This is just like learning styles: there’s no reliable data that the approach works, and the measurement instruments used are flawed. In short, one of the primary tools for organizational improvement is fundamentally broken. We’re using industrial age tools in an information age.
What’s a company to do? The first article quoted Josh Bersin when saying "companies need to focus very heavily on ‘collaboration, professional development, coaching and empowering people to do great things’". This is the message of the Internet Time Alliance and an outflow of the Coherent Organization model and the L&D Revolution. There are alternatives that are more respectful of how people really think, work, and learn, and consequently more effective. Are you ready to rethink?
#itashare
Clark
In a recent chat, a colleague I respect said the word ‘engagement’ was anathema. This surprised me, as I’ve been quite outspoken about the need for engagement (for one small example, writing a book about it!). It may be that the conflict is definitional, for it appeared that my colleague and another respondent viewed engagement as bloating the content, and that’s not what I mean at all. So I thought I’d lay out what I mean when I say engaging, and why I think it’s crucial.
Let’s be clear what I don’t mean. If you think engagement is adding in extra stuff, we’re using very different definitions of engagement. It’s not about tarting up uninteresting stuff with ‘fun’ (e.g. racing-themed window dressing on a knowledge test). It’s not about putting in unnecessary unrelated imagery, sounds, or anything else. Heck, the research of Dick Mayer at UCSB shows this actually hinders learning!
So what do I mean? For one thing, stripping away any ‘nice to have’ or unnecessary info. Lean is engaging! You have to focus on what really will help the learners, and in ways that they get, and then deliver on the ‘ways they get’ bit.
You need contextualized practice. Engaging means making the context meaningful to the learners. You need contextualization (e.g. research by John Bransford on anchored instruction), but arbitrary contextualization isn’t as good as intrinsically interesting contexts. This isn’t window dressing, since you need to be doing it anyway, but do it right. And in a minimal style (as de Saint-Exupery said: "Perfection is finally attained not when there is no longer anything to add but when there is no longer anything to take away…").
You want compelling examples. We know that examples lead to better learning (ala, for instance John Sweller’s work on cognitive load), but again, making them meaningful to the learners is critical. This isn’t window dressing, as we need them, but they’re better if they’re well told as intrinsically interesting stories.
Finally, we need to introduce the learning. Too often we do this in ways that the learner doesn’t get the WIIFM (What’s In It For Me). Learners learn better when they’re emotionally open to the content instead of uninterested. This may be a wee bit more, but we can account for this by getting rid of the usual introductory stuff. And it’s worth it.
Now, let’s be clear, this is for when we’ve deemed formal learning necessary. When the audience is practitioners who know what they need and why it’s important, then giving them ‘just the facts’, performance support, is sufficient. But if it’s new skills they need, when you need a learning experience, then you want to make it engaging. Not extrinsically, but intrinsically. And that’s not more in quantity, it’s not bloated; it’s more in quality: minimal for content and maximal for immersion.
Engaging learning is a good thing, a better thing than not, the right thing. I’m hoping it’s just definitional, because I can’t see the contrary argument unless there’s confusion over what I mean. Anyone?
Clark
I recently wrote about wearables, where I focused on form factor and information channels. An article I recently read talked about a guy who builds spy gear, and near the end he talked about some things that started me thinking about an extension of that for all mobile, not just wearables. The topic is sensors.
In the article, he talks about how, in the future, glasses could detect whether you’ve been around bomb-making materials:
"You can literally see residue on someone if your glasses emit a dozen different wavelengths of microlasers that illuminate clothing in real time and give off a signature of what was absorbed or reflected."
That’s pretty amazing, chemical spectrometry on the fly. He goes on to talk about distance vision:
"Imagine you have a pair of glasses, and you can just look at a building 50 feet away, 100 feet away, and look right through the building and see someone moving around."
Now, you might or might not like what he’s doing with that, but imagine applying it elsewhere: identifying where people are for rescue, or identifying materials for quality control.
Heck, I’d find it interesting just to augment the camera with infrared and ultraviolet: imagine being able to use the camera on your phone or glasses to see what’s happening at night, e.g. wildlife (tracking coyotes or raccoons, and managing to avoid skunks!). Night vision, and seeing things that fluoresce under UV would both be really cool additions.
I’d be interested, too, in having them enlarge things, bringing small things to light like a magnifying glass or microscope.
It made me think about all the senses we could augment. I was thinking about walking our dogs, and how their olfactory life is much richer than ours. They are clearly sensing things beyond our olfactory capabilities, and it would be interesting to have some microscent detectors that could track faint traces to track animals (or know which owner is not adequately controlling a dog, ahem). They could potentially serve as smoke or carbon monoxide detectors also.
Similarly, auditory enhancement: could we hear things fainter than our ears detect, or have them serve as a stethoscope? Could we detect far off cries for help that our ears can’t? Of course, that could be misused, too, to eavesdrop on conversations. Interesting ethical issues come in.
And we’ve already heard about the potential to measure one’s movement, blood pressure, pulse, temperature, and maybe even blood sugar, to track one’s health. The fit bands are getting smarter and more capable.
There is the possibility for other things we personally can’t directly track: measuring ambient temperature quantitatively and air pressure are both already possible, and in some devices. The thermometer could be a health and weather guide, and a barometer/altimeter would be valuable for hiking in addition to weather.
The combination of reporting these could be valuable too. Sensor nets, where the data from many micro sensors are aggregated, have interesting possibilities. Either with known combinations, such as aggregating temperature and air pressure to help with weather, or with machine learning, where for example we include sensitive motion detectors and might be able to learn to predict earthquakes as animals supposedly can. Sound, too, could be used to triangulate on cries for help, and material detectors could help locate sources of pollution.
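As a toy sketch of the aggregation idea (all readings invented): individually uninteresting data points, combined by coarse location, start to look like a weather map.

```python
from collections import defaultdict
from statistics import mean

# Toy sketch of a sensor net: individually noisy temperature/pressure
# readings from many anonymous devices, averaged by coarse location cell
# into something weather-like. All readings are made up.

readings = [  # (lat, lon, temp_c, pressure_hpa)
    (37.78, -122.42, 18.2, 1012.1),
    (37.78, -122.41, 17.9, 1011.8),
    (37.79, -122.42, 18.5, 1012.4),
    (37.33, -121.89, 21.3, 1009.9),
]

def aggregate(readings, cell_deg=0.05):
    """Average readings within ~5 km grid cells: one phone tells you
    little, thousands sketch a live map."""
    cells = defaultdict(list)
    for lat, lon, t, p in readings:
        key = (round(lat / cell_deg), round(lon / cell_deg))
        cells[key].append((t, p))
    return {k: (mean(t for t, _ in v), mean(p for _, p in v))
            for k, v in cells.items()}

for cell, (t, p) in aggregate(readings).items():
    print(f"cell {cell}: {t:.1f} C, {p:.1f} hPa")
```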
We’ve done amazing things with technology, and sensors are both shrinking and getting more powerful. Imagine having sensors scattered about your body in various wearables and integrating that data in known ways, and agreeing for anonymous aggregation for data mining. Yes, there are concerns, but benefits too.
We can put these together in interesting ways, notifications of things we should pay attention to, or just curiosity to observe things our natural senses can’t detect. We can open up the world in powerful ways to support being more informed and more productive. It’s up to us to harness it in worthwhile ways.
Clark
Someone tweeted about their mobile learning credo, and mentioned the typical ‘mlearning is elearning, extended’ view. Which I rejected, as I believe mlearning is much more (and so should elearning be). And then I thought about it some more. So I’ll lay out my thinking, and see what you think.
I have been touting that mLearning could and should be focused, as should P&D, on anything that helps us achieve our goals better. Mobile, paper, computers, voodoo, whatever technology works. Certainly in organizations. And this yields some interesting implications.
So, for instance, this would include performance support and social networks. Anything that requires understanding how people work and learn would be fair game. I was worried about whether that fit some operational aspects like IT and manufacturing processes, but I think I’ve got that sorted. UI folks would work on external products, and any internal software development, but around that, helping folks use tools and processes belongs to those of us who facilitate organizational performance and development. So we, and mlearning, are about any of those uses.
But the person, despite seeming to come from a vendor to orgs, not schools, could be talking about schools instead, and I wondered whether mLearning for schools, definitionally, really is about only supporting learning. And I can see the case for that: that mlearning in education is about using mobile to help people learn, not perform. It’s about collaboration, for sure, and tools to assist.
Note I’m not making the case for schools as they are, a curriculum rethink definitely needs to accompany using technology in schools in many ways. Koreen Pagano wrote this nice post separating Common Core teaching versus assessment, which goes along with my beliefs about the value of problem solving. And I also laud Roger Schank‘s views, such as the value (or not) of the binomial theorem as a classic example.
But then, mobile should be a tool in learning, so it can work as a channel for content, but also for communication, and capture, and compute (e.g. the 4C’s of mlearning). And the emergent capability of contextual support (the 5th C, e.g. combinations of the first four). So this view would argue that mlearning can be used for performance support in accomplishing a meaningful task that’s part of an learning experience.
That would take me back to mlearning being more than just mobile elearning, as Jason Haag has aptly separated. Sure, mobile elearning can be a subset of mlearning, but not the whole picture. Does this make sense to you?
Clark
I’m a big fan of the mantras of (variously) ‘show your work‘ or ‘working out loud‘. I think that the notion of showing what you’re doing helps other work with you to make it better, or learn from you if you do well. This contributes to the success of the ‘coherent organization‘, where information flows in ways that are aligned with the goals of the organization. But I want to extend it a bit.
Let me use an analogy: remember when your teacher asked you to ‘show your work’? It wasn’t just the product, but the intermediate steps, and I’m sure that’s what Jane Bozarth is implying. But it’s too easy for people to think it’s about making your work product available, instead of one that’s marked up with the underlying thinking. In user interface work this was known as ‘design rationale’, where you kept track of the assumptions and decisions along the way.
I’ve termed this ‘cognitive annotation’ at various points (what, me create new phrases?). And it’s really important for a number of reasons:
people can learn from what you thought and did
people can provide feedback if they notice any problems with your thinking
and new people can avoid having a team revisit decisions
I have a couple of guilty pleasures that make this point quite clearly. Lee Child is a writer who has a character called Jack Reacher (hence the movie, pretty entertaining despite the wrong physical type to play the lead role). This character is ex-military police and quite capable in challenging situations. What makes this series more than usually interesting is that the character regularly outlines the situation, the thinking behind it, and the resulting actions taken. In doing so, it’s often a format like "most people think X, but because of Y, Z is a better choice".
Another place this shows up is the recently finished television series Burn Notice. In this case, a ‘burned’ spy is forced to freelance, and regularly gets in situations where again, the conventional wisdom is debunked. With a regular approach of a ‘sting’, along with a dry humor and some larger-than-life characters, it’s fun, and interesting because of the underlying thinking that explains the choices made.
Granted, neither of these are situations I have any interest in being in, but it’s a nice twist and makes the stories more interesting.
In the real world, it can be hard to share underlying thinking if it’s a Miranda organization, but the benefits suggest that the effort to achieve a culture where such openness is ‘safe’ is a worthwhile endeavor. At least, that’s my thinking.
#itashare
Clark
Well, some more travels are imminent, so I thought I’d update you on where the Quinnovation road show would be on tour this spring:
March 9-10 I’ll be collaborating with Sarah Gilbert and Nick Floro to deliver ATD’s mLearnNow event in Miami on mobile
On the 11th I’ll be at a private event talking the Revolution to a select group outside Denver
Come the 18th I’ll be inciting the revolution at the ATD Golden Gate chapter meeting here in the Bay Area
On the 25th-27th, I’ll be in Orlando again instigating at the eLearning Guild’s Learning Solutions conference
May 7-8 I’ll be kicking up my heels about the revolution for the eLearning Symposium in Austin
I’ll be stumping the revolution at another vendor event in Las Vegas 12-13
And June 2-3 I’ll be myth-smashing for ATD Atlanta, and then workshopping game design
So, if you’re at one of these, do come up and introduce yourself and say hello!
Clark
There’s been quite a bit of flurry about Design Thinking of late (including the most recent #lrnchat), and I’m trying to get my head around what’s unique about it. The wikipedia entry linked above helps clarify the intent, but is there any there there?
It helps to understand that I’ve been steeped in design approaches since at least the 80’s. Herb Simon’s Sciences of the Artificial argued, essentially, that design is the quintessential human activity. And my grad school experience was in a research lab focused on interface design. Process was critical, and when I was subsequently teaching interface design, I was tracking new initiatives like situated design and participatory design, anthropological efforts designed to get closer to the ‘customer’.
In addition to being somewhat obsessive about learning how people learn, and as a confirmed geek continually exploring new technology, I also got interested in design processes beyond interface design. As my passion was designing learning technology solutions to meet real needs, I explored other design approaches to look for universals. Along the way I looked at industrial, graphic, architectural, software, and other design disciplines. I also read the psychological research on our cognitive limitations and design approaches. (I made a small bit of my career on bringing the advances in HCI, which was more advanced in process, to ed tech.)
The reason I mention this is that the elements of Design Thinking: being open minded, diverging before converging, using teams, empathy for the customer, etc, all strike me as just good design. It’s not obvious to me whether it gets into the nuances (e.g. the steps in the Wikipedia article don’t allow me to see whether they do things like ensure that everyone takes time to brainstorm on their own before coming together; an important step to prevent groupthink), but at the granularity I’ve seen, it seems to be quite good. You mean everyone isn’t already both aware of and using this? Apparently not.
So in that respect, Design Thinking is a win. If adding a label to a systematized compendium of good practices will raise awareness, I’m all for it. And I’m willing to have my consciousness raised that there’s more to it, because as a proponent of design, I’m glad to see that folks are taking steps to help design get better and will be thrilled if it adds something new.
Clark
A couple of weeks ago, I was riffing on sensors: how mobile devices are getting equipped with all sorts of new sensors, the potential for more, and what they might bring. Part of that discussion was a brief mention of sensor nets, and how aggregating all this data could be of interest too. And lo and behold, a massive example was revealed last week.
The context was the ‘spring forward’ event Apple held where they announced their new products. The most anticipated one was the Apple Watch (which was part of the driver behind my post on wearables), the new iConnected device for your wrist. The second major announcement was their new Macbook, a phenomenally thin new laptop with some amazing specs on weight and screen display, as well as some challenging tradeoffs.
One announcement that was less noticed was the announcement of a new research endeavor, but I wonder if it isn’t the most game-changing element of them all. The announcement was ResearchKit, and it’s about sensor nets.
So, smartphones have lots of sensors. And the watch will have more. They can already track a number of parameters about you automatically, such as your walking. There can be more, with apps that can ask about your eating, weight, or other health measurements. As I pointed out, aggregating data from sensors could do things like identify traffic jams (Google Maps already does this), or collect data like restaurant ratings.
What Apple has done is focus specifically on health data gathered via HealthKit, and partner with research hospitals. What they’re saying to scientists is "we’ll give you anonymized health data, you put it to good use". A number of research centers are on board, already collecting data about asthma and more. The possibility is to use analytics that combine the power of large numbers with a wealth of other descriptive data to investigate things at scale. In general, research like this is hard because recruiting large numbers of subjects is hard, yet large numbers provide a much better basis for study (for example, the China–Cornell–Oxford Project was able to look at a vast breadth of diets to generate innovative insights into nutrition and health).
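And to make the ‘power of large numbers’ point concrete, here’s a minimal sketch of the kind of analysis such scale enables. The data, field names, and effect are entirely made up for illustration (this is not ResearchKit’s actual interface), but the statistics are real: with ten thousand anonymized participants, a relationship between activity and symptoms stands out in a way small clinical cohorts rarely allow.

    import random
    import statistics  # statistics.correlation needs Python 3.10+

    random.seed(42)

    # Hypothetical anonymized participant records, as a research app
    # might export them: a daily step count plus a self-reported
    # symptom score, with lower activity loosely tied to worse symptoms.
    def make_participant():
        steps = max(0.0, random.gauss(7000, 2500))
        score = max(0.0, random.gauss(6 - steps / 2000, 1.5))
        return {"daily_steps": steps, "symptom_score": score}

    participants = [make_participant() for _ in range(10_000)]

    steps = [p["daily_steps"] for p in participants]
    scores = [p["symptom_score"] for p in participants]

    # Pearson correlation across the whole pool; at n = 10,000 the
    # estimate is stable enough to detect far weaker effects than this.
    r = statistics.correlation(steps, scores)
    print(f"n = {len(participants)}, r = {r:.2f}")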
And this could be just the beginning: collecting data en masse (while successfully addressing privacy concerns) can be a source of great insight if it’s done right. Having devices that are with you and capable of capturing a variety of information gives the opportunity to mine that data for expected, and unexpected, outcomes.
A new iDevice is always cool, and while it’s not the first smart watch (nor was the iPhone the first smartphone, the iPad the first tablet, or the iPod the first music player), Apple has a way of making the experience compelling. As with the iPad, I haven’t yet seen the personal value proposition, so I’m on the fence. But the ability to collect data in a massive way that could support ground-breaking insights and innovations in medicine? That has the potential to affect millions of people around the world. Now that is impact.
Clark
Blog | Nov 22, 2015 05:10am
The other day, I was wondering about the possibilities of removing mandatory courses. Ok, maybe not those mandated for compliance, but any others. And then a colleague took it further, and I like it. So what are we talking about?
I was thinking that, if you give people a meaningful mission (à la Dan Pink’s Drive), learners (assuming reasonable self-learning skills, a separate topic) would take responsibility for the learning they needed. We could have courses around, or perhaps await their requests and point them to outside resources, unless the content is specifically internal. That is, we become much more pull (from the learner) than push (from us).
However, my colleague Mark Britz took it further. He argued that, beyond not making them go, we’d charge them what it cost to provide the learning! That is, if folks wanted training or webinars or…, they’d pay for the privilege. As he put it, if that meant people thought twice about requesting elearning, were cautious about signing up, and so on: "I couldn’t be happier!"
His point is that it would drive people to more workflow learning, more social and shared learning, etc. And that’s a good thing. I might couple that with some way to make sure they knew how to work, play, and learn well together, but it’s the different view that’s a needed jumpstart.
It’s a refreshing twist on the ‘if we build it, it is good’ mentality, and it really helps focus the L&D unit on doing things that will significantly improve outcomes for others. If you can make a meaningful impact, people will be willing to pay for your assistance. You want change? You’ll pay, but it’ll be worth it.
If we’re going to kick off a revolution, we need to rethink what we’re about and how we’re doing it. Mark’s upended view is a necessary kick in the status quo to get us to think anew about what we’re doing and why.
I recommend you read his original post.
Clark
Blog | Nov 22, 2015 05:10am
A couple of times last year, firms with some exciting learning tools approached me to talk about the market. And in both cases, I had to advise them that there were some barriers they’d have to address. That was brought home to me in another conversation, and it makes me worry about the state of our industry.
So the first tool is based upon a really sound pedagogy, consonant with my activity-based learning approach. The basis is giving learners assignments very much like those they’ll need to accomplish in the workplace, and then resourcing them to succeed. The makers wanted to make it easy for others to create these better learning designs (as part of a campaign for better learning). The only problem was, you had to learn the design approach as well as the tool. Their interface wasn’t ready for prime time, but the real barrier was getting people to adopt a new tool at all. I indicated some of the barriers, and they’re reconsidering (while continuing to develop content against this model as a service).
The second tool supports virtual role plays in a powerful way, having smart agents that react in authentic ways. And they, too, wanted to provide an authoring tool to create them. And again my realistic assessment of the market was that people would have trouble understanding the tool. They decided to continue to develop the experiences as a service.
Now, these are somewhat esoteric designs, though the former should be the basis of our learning experiences, and the latter would be a powerful addition to support a very common and important type of interaction. The more surprising, and disappointing, issue came up in a conversation earlier this year with a proponent of a more familiar tool.
Without being specific (I haven’t received permission to disclose the details of any of the above), this person indicated that, when training a popular and fairly straightforward tool, the biggest barrier wasn’t the underlying software model. I was expecting that too much of the training was based upon rote assignments without an underlying model, and that is the case, but there was a more fundamental barrier: too many potential users just didn’t have sufficient computer skills! And I’m not talking about programming; fundamental understandings of files, ‘styles’, and other core computing concepts just weren’t present in sufficient measure in these would-be authors. Seriously!
Now, I’ve complained before that we’re not taking learning design seriously, but apparently that problem is compounded by a lack of fundamental computer skills. Folks, this is elearning, not chalk learning, not chalk talk. If you struggle to add new apps to your computer, or to find files, you’re not ready to be an elearning developer.
I admit I struggle to see how folks can assume that, with knowledge of neither design nor technology, they can still be elearning designers and developers. These tools are scaffolding that lets your designs be developed. They don’t do the design, nor will they magically cover for a lack of tech literacy.
So, let’s get realistic. Learn about learning design, and get comfortable with tech, or please, please, don’t do elearning. And I promise not to do music, architecture, finance, and everything else I’m not qualified for. Fair enough?
Clark
Blog | Nov 22, 2015 05:10am
Tom Wujec gave a discursive and well illustrated talk about how changes in technology were changing industry, ultimately homing in on creativity. Despite a misstep mentioning Kolb’s invalid learning styles instrument, it was entertaining and intriguing.
Clark
Blog | Nov 22, 2015 05:10am
Michael Furdyk gave an inspiring talk this morning about his trajectory through technology and then five ideas that he thought were important elements in the success of the initiatives he had undertaken. He gave lots of examples and closed with interesting questions about how we might engage learners through badges, mobile, and co-creation.
Clark
Blog | Nov 22, 2015 05:09am
Juliette LaMontagne closed the Learning Solutions conference with the compelling story of the Breaker project, connecting kids to real world experiences.
Clark
Blog | Nov 22, 2015 05:09am