
In an insightful article, Ken Majer (full disclosure: a boss of mine many years ago) has written about the need to have the right culture before executing strategy. This strikes me as a valuable contribution to thinking about effective change in the transformation of L&D in the Revolution.

I have argued that you can get some benefits from the Revolution without an optimized culture, but you’re not going to tap into the full potential. Revising formal learning to be truly effective by aligning with how we learn, adding performance support in ways that augment our cognitive limitations, and so on, will all offer useful outcomes. The optimal execution work will benefit, but the ability to truly tap into the network for continual innovation requires making it safe and meaningful to share. If it’s not safe to Show Your Work, you can’t capitalize on the benefits.

What Ken is talking about here is ensuring you have values and culture in alignment with the vision and mission. And I’ll go further and say that in the long term, those values have to be about valuing people, and the culture has to be about working and learning together effectively. I think that’s the ultimate goal when you really want to succeed: we know that people perform best when given meaningful work and empowered to pursue it.

It’s not easy, for sure. You need to get explicit about your values and how they manifest in how you work. You’ll likely find that some of the implicit values are a barrier, and they’ll require conscious work to address. The change in approach by management and executives, and the organizational restructuring that can accompany this new way of working, isn’t going to happen overnight, and change is hard. But it is increasingly, and will be, a business necessity. So too for the move to a new L&D. You can start working in these ways within your organization, and grow it. And you should.
It’s part of the path, the roadmap, to the Revolution. I’m working on more pieces of it, trying to pull it together more concretely, but it’s clear to me that one thread (as already indicated in the diagrams that accompany the book) is indeed a path to a more enabling culture. In the long term, it will be uplifting, and it’s worth getting started on now.
Clark . Blog . Nov 22, 2015 04:56am
I’m recognizing that there’s an opportunity to provide more support for implementing the Revolution. So I’ve been thinking through what sort of process might be a way to go about making progress. Given that the core focus is on aligning with how we think, work, and learn (elements we’re largely missing), I thought I’d see whether that could provide a framework. Here’s my first stab, for your consideration:

Assess: here we determine our situation. I’m working on an evaluation instrument that covers the areas and serves as a guide to any gaps between current status and possible futures, but the key element is to ascertain where we are.

Learn: this step is about reviewing the conceptual frameworks available, e.g. our understandings of how we think, work, and learn. The goal is to identify possible directions in detail and to prioritize them. The ultimate outcome is our next step to take, though we may well have a sequence queued up.

Initiate: after choosing a step, here’s where we launch it. This may not be a major initiative. The principle of ‘trojan mice‘ suggests small, focused steps, and there are reasons to think small steps make sense. We’ll need to follow the elements of successful change: planning, communicating, supporting, rewarding, etc.

Guide: then we need to assess how we’re doing and look for needed interventions. This involves knowing what the change should accomplish, evaluating to see if it’s occurring, and implementing refinements as we go. We shouldn’t assume it will go well, but instead check and support.

Nurture: once we’ve achieved a stable state, we want to nurture it on an ongoing basis. This may mean documenting and celebrating the outcome, replicating it elsewhere, ensuring persistence and continuity, and returning to see where we are now and where we should go next.

Obviously, I’m pushing the ALIGN acronym (as one does), as it helps reinforce the message. Now to put in place tools to support each step. Feedback solicited!
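As a thought experiment, the five steps could even be sketched as a simple ordered cycle. This is purely my own illustrative scaffolding (the class and method names are invented, only the step names come from the post):

```python
from dataclasses import dataclass, field

# The ALIGN steps, in order, as described above.
ALIGN_STEPS = ["Assess", "Learn", "Initiate", "Guide", "Nurture"]

@dataclass
class AlignCycle:
    """Tracks progress through one pass of the ALIGN process."""
    completed: list = field(default_factory=list)

    def next_step(self):
        """Return the next step to take, or None when the cycle is done."""
        if len(self.completed) == len(ALIGN_STEPS):
            return None
        return ALIGN_STEPS[len(self.completed)]

    def complete(self, step):
        """Mark a step done, enforcing that steps happen in order."""
        if step != self.next_step():
            raise ValueError(f"Expected {self.next_step()!r}, got {step!r}")
        self.completed.append(step)

cycle = AlignCycle()
cycle.complete("Assess")
print(cycle.next_step())  # Learn
```

The point of the sketch is only that the steps are sequential and cyclical: after Nurture, you return to see where you are now, which is another Assess.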
Clark . Blog . Nov 22, 2015 04:56am
Last Friday’s #GuildChat was on Agile Development. The topic is interesting to me because, as with Design Thinking, it seems like well-known practices with a new branding. So, as I did then, I’ll lay out what I see and hope others will enlighten me.

As context, during grad school I was in a research group focused on user-centered system design, which included design, processes, and more. I subsequently taught interface design (aka Human Computer Interaction, or HCI) for a number of years (while continuing to research learning technology), and made a practice of advocating the best practices from HCI to the ed tech community. What was current at the time were iterative, situated, collaborative, and participatory design processes, so I was pretty familiar with the principles, and a fan. That is: really understand the context, design and test frequently, and work in teams with your customers.

Fast forward a couple of decades, and the Agile Manifesto puts a stake in the ground for software engineering. We see a focus on releasable code, but again with principles of iteration and testing, teamwork, and tight customer involvement. Michael Allen was enthused enough to use it as a spark that led to the Serious eLearning Manifesto. That inspiration has clearly (and finally) now moved to learning design. Whether it’s Allen’s SAM or Ger Driesen’s Agile Learning Manifesto, we’re seeing a call for rethinking the old waterfall model of design. And this is a good thing (only decades late ;). Certainly we know that working together is better than working alone (if you manage the process right ;), so the collaboration part is a win.

And we certainly need change. The existing approaches we too often see involve a designer being given some documents, access to a SME (if lucky), and told to create a course on X. Sure, there are tools and templates, but they are focused on making particular interactions easier, not on ensuring better learning design.
The person then works alone and does the design and development in one pass. There are likely to be review checkpoints, but there’s little testing. There are variations on this, including perhaps an initial collaboration meeting, some SME review, or a storyboard before development commences, but too often it’s largely an independent, one-way flow, and this isn’t good.

The underlying issue is that waterfall models, where you specify the requirements in advance and then design, develop, and implement, just don’t work. The problem is that the human brain is pretty much the most complex thing in existence, and when we determine a priori what will work, we don’t take into account that, Heisenberg-like, what we implement will change the system. Iterative development and testing allows the specs to change after initial experience.

Several issues arise with this, however. For one, there’s a question about the right size and scope of a deliverable. Learning experiences, while typically overwritten, do have some structure that keeps them from having intermediately useful results. I was curious about what made sense; to me it seemed that you could develop your final practice first as a deliverable, and then fill in with the required earlier practice and content resources. This seemed similar to what was offered up during the chat in response to my question.

The other issue is scoping and budgeting the process. I often ask, when talking about game design, how to know when to stop iterating. The usual (and wrong) answer is when you run out of time or money. The right answer is when you’ve hit your metrics: the ones you should set before you begin, that determine the parameters of a solution (and they can be consciously reconsidered as part of the process). The typical answer, particularly for those concerned with controlling costs, is something like a heuristic choice of 3 iterations.
Drawing on some other work in software process, I’d recommend creating estimates, but then reviewing them afterward. In the software case, people got much better at estimates, and that could be a valuable extension. And it shouldn’t be any more difficult to estimate, certainly with some experience, than existing methods.

Ok, so I may be a bit jaded about new brandings on what should already be good practice, but I think anything that helps us focus on developing in ways that lead to quality outcomes is a good thing. I encourage you to work more collaboratively, develop and test more iteratively, and work on discrete chunks. Your stakeholders should be glad you did.
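The "stop when you’ve hit your metrics, not when you run out of budget" point can be sketched as a loop. This is a minimal illustration; the function names, the mastery metric, and the threshold are all invented for the example:

```python
def iterate_until_metrics(build, evaluate, targets, max_iterations=10):
    """Iterate a design until evaluation meets the pre-set targets.

    The stopping rule is hitting your metrics; max_iterations is only
    a safety valve for budget, not the goal itself.
    """
    design = None
    for i in range(1, max_iterations + 1):
        design = build(design)                       # revise the design
        scores = evaluate(design)                    # test with learners
        if all(scores[m] >= t for m, t in targets.items()):
            return design, i                         # metrics met: stop
    return design, max_iterations                    # budget exhausted

# Toy example: each iteration improves a (hypothetical) mastery score.
improve = lambda d: (d or 0) + 0.2
score = lambda d: {"mastery": d}
final, iters = iterate_until_metrics(improve, score, {"mastery": 0.75})
print(iters)  # 4
```

The contrast with "3 iterations, then ship" is that the loop’s exit condition is the measured outcome, with the iteration cap there only as a conscious budget decision.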
Clark . Blog . Nov 22, 2015 04:55am
One of my arguments for the L&D revolution is the role that L&D could be playing. I believe that if L&D were truly enabling optimal execution as well as facilitating continual innovation (read: learning), it would be as critical to the organization as IT. And that made me think about how this role would differ.

To be sure, IT is critical. In today’s business, we track our business, do our modeling, run operations, and more with IT. There is plenty of vertical-specific software, from product design to transaction tracking, and of course more general business software such as document generation, financials, etc. So how can L&D be as ubiquitous as other software? Several ways.

First, formal learning software is really enterprise-wide. Whether it’s simulations/scenarios/serious games, spaced learning delivered via mobile, or user-generated content (note: I’m deliberately avoiding the LMS and courses ;), these things should play a role in preparing the audience to optimally execute, and be accessed by a large proportion of the audience. And that’s not including our tools to develop same.

Similarly, our performance support solutions - portals housing job aids and context-sensitive support - should be broadly distributed. Yes, IT may own the portals, but in most cases they are not to be trusted to produce a user- and usage-centered solution. L&D should be involved in ensuring that the solutions both articulate with and reflect the formal learning, and are organized by user need, not business silo.

And of course the social network software - profiles and locators as well as communication and collaboration tools - should be under the purview of L&D. Again, IT may own or maintain them, but the facilitation of their use, the understanding of the different roles, and ensuring they’re being used effectively is a role for L&D.
My point here is that there is an enterprise-wide category of software, supporting learning in the big sense (including problem-solving, research, design, innovation), that should be under the oversight of L&D.  And this is the way in which L&D becomes more critical to the enterprise.  That it’s not just about taking people away from work and doing things to them before sending them back, but facilitating productive engagement and interaction throughout the workflow.  At least at the places where they’re stepping outside of the known solutions, and that is increasingly going to be the case.
Clark . Blog . Nov 22, 2015 04:55am
Last week, I wrote about a process to follow in moving forward on the L&D Revolution. The first step is Assess, and I’ve been thinking about what that means. So here, let me lay out some preliminary thoughts.

The first level is the broad categories. As I’m talking about aligning with how we think, work, and learn, those are the three top areas where I feel we fail to recognize what’s known about cognition, individually and together. As I mentioned yesterday, I’m looking at how we use technology to facilitate productivity in ways specifically focused on helping people learn. But let me be clear: here I’m talking about the big picture of learning - problem-solving, design, research, innovation, etc. - as they all fall under the category of things we don’t know the answer to when we begin.

I started with how we think. Too often we don’t put information in the world when we can, yet we know that not all our thinking is in our head. So we can ask:

Are you using performance consulting?
Are you taking responsibility for resource development?
Are you ensuring the information architecture for resources is user-focused?

The next area is working, and here the revelation is that the best outcomes come from people working together. Creative friction, when done in consonance with how we work together best, is where the best solutions and the best new ideas will come from. So you can look at:

Are people communicating?
Are people collaborating?
Do you have in place a learning culture?

Finally, with learning, as the area most familiar to L&D, we need to look at whether we’re applying what’s known about making learning work. We should start with Serious eLearning, but we can go farther. Things to look at include:

Are you practicing deeper learning design?
Are you designing engagement into learning?
Are you developing meta-learning?

In addition to each of these areas, there are cross-category issues. Things to look at for each include:

Do you have infrastructure?
What are you measuring?

All of these areas have nuances underneath, but at the top level these strike me as the core categories of questions. This is working down to a finer grain than I looked at in the book (cf. Figure 8.1), though that was a good start at evaluating where one is. I’m convinced that the first step for change is to understand where you are (before the next step, Learn, about where you could be). I’ve yet to see many organizations that are in full swing here, and I have persistently made the case that the status quo isn’t sufficient. So, are you ready to take the first step and assess where you are?
Clark . Blog . Nov 22, 2015 04:55am
At DevLearn next week, I’ll be talking about content systems in session 109.  The point is that instead of monolithic content, we want to start getting more granular for more flexible delivery. And while there I’ll be talking about some of the options on how, here I want to make the case about why, in a simplified way. As an experiment (gotta keep pushing the envelope in a myriad of ways), I’ve created a video, and I want to see if I can embed it.  Fingers crossed.  Your feedback welcome, as always.  
Clark . Blog . Nov 22, 2015 04:54am
Today I attended David Pogue’s #DevLearn keynote. And, as a DevLearn ‘official blogger’, I was expected to mindmap it (as I regularly do). So, I turn on my iPad and have a steady series of problems. The perils of living in a high tech world.

First, when I open my diagramming software, OmniGraffle, it doesn’t work. I find out they’ve stopped supporting this edition! So, $50 later (yes, it’s almost unconscionably dear) and sweating out the download ("will it finish in time?"), I start prepping the mindmap. Except the way it does things is different. How do I add break points to an arrow?!? Well, I can’t find a setting, but I finally explore other interface icons and find a way. The defaults are different, but I manage to create a fairly typical mindmap. Phew.

So, I export to Photos and open WordPress. After typing in my usual insipid prose, I go to add the image. And it starts, and fails. I try again, and it’s reliably failing. I re-export, and try again. Nope. I get the image over to my iPhone to try it there, to no avail. I’ve posted the image to the conference app, but it’s not going to appear here until I get back to my room and my laptop. Grr. Oh well, that’s life in this modern world, eh?
Clark . Blog . Nov 22, 2015 04:54am
David Pogue addressed the DevLearn audience on Learning Disruption. In a very funny and insightful presentation, he ranged from the Internet of Things, thru disintermediation and wearables, pointing out disruptive trends. He concluded by talking about the new generation and the need to keep trying new things. 
Clark . Blog . Nov 22, 2015 04:54am
Connie Yowell gave a passionate and informative presentation on the driving forces behind digital badges.
Clark . Blog . Nov 22, 2015 04:54am
Adam Savage gave a thoughtful, entertaining, and ultimately moving talk about how Art and Science are complementary components of what makes us human. He continued telling stories that kept us laughing while learning, and ended on a fabulous note about being willing to be vulnerable as a person and a parent.  Truly a great keynote.
Clark . Blog . Nov 22, 2015 04:54am
To close off the DevLearn conference, Natalie Panek (@nmpanek) told of her learning journey to be a space engineer with compelling stories of challenging experiences.  With an authentic and engaging style, she helped inspire us to keep learning.
Clark . Blog . Nov 22, 2015 04:54am
At the recent DevLearn conference, David Kelly spoke about his experiences with the Apple Watch. Because I don’t have one yet, I was interested in his reflections. There were a number of things, but what came through for me (and in other reviews I’ve read) is that the time scale is a factor.

Now, first, I don’t have one because, as with technology in general, I don’t typically acquire anything until I know how it’s going to make me more effective. I may have told this story before, but for instance I wasn’t interested in acquiring an iPad when it was first announced ("I’m not a content consumer"). By the time they were available, however, I’d heard enough about how it would make me more productive (as a content creator) that I got one the first day it was available. So too with the watch. I don’t get a lot of notifications, so that isn’t a real benefit. The ability to be navigated subtly around towns sounds nice, and to check on certain things. Overall, however, I haven’t really found the tipping-point use-case.

However, one thing he said triggered a thought. He was talking about how it had reduced the number of times he accessed his phone, and I’d heard that from others, but here it struck a different chord. It made me realize it’s about time frames. I’m trying to make useful conceptual distinctions between devices to help designers figure out the best match of capability to need. So I came up with what seemed an interesting way to look at it. Similar to the way I’d seen Palm talk about the difference between laptops and mobile, I was thinking about the time you spend using your devices. The watch (a wearable) is accessed quickly for small bits of information. A pocketable (e.g. a phone) is used for a number of seconds up to a few minutes. And a tablet tends to get accessed for longer uses (a laptop doesn’t count). Folks may well have all 3, but they use them for different things.
Sure, there are variations (you can watch a movie on a phone, for instance, and phone calls can be considerably longer), but by and large I suspect that the time of access you need will be a determining factor (it’s also tied to both battery life and screen size). Another way to look at it would be the amount of information you need to make a decision about what to do, e.g. for cognitive work. Not sure this is useful, but it was a reflection, and I do like to share those. I welcome your feedback!
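One way to make the time-frame distinction concrete is a rough lookup from expected interaction length to device class. The thresholds below are my own illustrative guesses, not anything the framework prescribes:

```python
def likely_device(seconds):
    """Rough mapping from expected interaction time to device class.

    Thresholds are illustrative: wearables for glances, pocketables
    for seconds to a few minutes, tablets for longer engagement.
    """
    if seconds < 10:
        return "wearable"     # quick glance: time, a notification, a step
    if seconds < 300:
        return "pocketable"   # a lookup, a message, quick navigation
    return "tablet"           # reading, review, longer media

print(likely_device(5))     # wearable
print(likely_device(60))    # pocketable
print(likely_device(1200))  # tablet
```

The design takeaway is that the same content or support may want a different form per device: a glanceable cue for the wrist, a short interaction for the pocket, a sustained experience for the tablet.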
Clark . Blog . Nov 22, 2015 04:53am
At the recent DevLearn, Donald Clark talked about AI in learning, and while I largely agreed with what he said, I had some thoughts and some quibbles. I discussed them with him, but I thought I’d record them here, not least as a basis for further discussion. Donald’s an interesting guy, very sharp and a voracious learner, and his posts are both insightful and inciteful (he doesn’t mince words ;). Having built and sold an elearning company, he’s now free to pursue what he believes in, and that’s currently the power of technology to teach us.

As background, I was an AI groupie out of college, and have stayed current with most of what’s happened. And you should know a bit of the history of the rise of Intelligent Tutoring Systems, the problems with developing expert models, and current approaches like Knewton and Smart Sparrow. I haven’t been free to follow the latest developments as much as I’d like, but Donald gave a great overview. He pointed to systems being on the verge of auto-parsing content and developing learning around it. He showed an example, where it created questions from dropping in a page about Las Vegas. He also showed how systems can adapt individually to the learner, and discussed how this would be able to provide individual tutoring without many of the limitations of human teachers (cognitive bias, fatigue), and can not only personalize but self-improve and scale!

One of my short-term problems was that the auto-generated questions were about knowledge, not skills. While I do agree that knowledge is needed (à la van Merriënboer’s 4C/ID) as well as applying it, I think focusing on the latter first is the way to go. This goes along with what Donald has rightly criticized as problems with multiple-choice questions. He points out how they’re largely used as knowledge tests, and I agree that’s wrong, but while there are better practice situations (read: simulations/scenarios/serious games), you can write multiple choice as mini-scenarios and get good practice.
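The difference between a knowledge check and a mini-scenario is that the stem is a situation and the options are decisions, each with consequence-based feedback. A minimal sketch of the structure, with all the content invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class MiniScenario:
    """A multiple-choice item framed as a decision, not a recall check."""
    situation: str   # the context the learner is placed in
    options: list    # actions to choose between, not facts to recall
    best: int        # index of the best action
    feedback: list   # consequence-based feedback, one entry per option

item = MiniScenario(
    situation=("A customer reports the export fails only for files "
               "over 100 MB. What do you do first?"),
    options=["Reinstall the app",
             "Reproduce with a large test file",
             "Escalate to engineering immediately"],
    best=1,
    feedback=["Reinstalling rarely addresses size-dependent failures.",
              "Reproducing confirms the trigger before escalating.",
              "Without reproduction steps, engineering will bounce it back."],
)
print(item.options[item.best])  # Reproduce with a large test file
```

Note that every field carries the decision framing: generating items like this automatically is exactly the harder research problem, compared to generating "which of these is true about Las Vegas?" questions.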
However, it’s as yet an interesting research problem, to me, to get good scenario questions out of auto-parsed content. I naturally argued for a hybrid system, where we divvy up roles between computer and human based upon what each does well, and he said that is what he is seeing in the companies he tracks (and, at least in some cases, funds). A great principle.

The last bit that interested me was whether and how such systems could develop not only learning skills, but meta-learning, or learning-to-learn, skills. Real teachers can develop this and modify it (though admittedly rarely), and yet it’s likely to be the best investment. In my activity-based learning approach, I suggested that learners should gradually take over choosing their activities, to develop their ability to become self-learners. I’ve also suggested how it could be layered on top of regular learning experiences. I think this will be an interesting area for developing learning experiences that are scalable but truly develop learners for the coming times.

There’s more: pedagogical rules, content models, learner models, etc., but we’re finally getting close to being able to build these sorts of systems, and we should be aware of the possibilities, understand what’s required, and be on the lookout for both the good and bad on tap. So, what say you?
Clark . Blog . Nov 22, 2015 04:53am
For the past 6 months, Learnnovators has been hosting a series of posts I’ve done on Deeper eLearning Design that goes through the elements beyond traditional ID. That is, reflecting on what’s known about how we learn and what that implies for the elements of learning. Too often, other than saying we need an objective and practice (and getting those wrong), we talk about ‘content’. Basically, we don’t talk enough about the subtleties. So here I’ve been getting into the nuances of each element, closing with an overview of the changes implied for processes:

1. Deeper eLearning Design: Part 1 - The Starting Point: Good Objectives
2. Deeper eLearning Design: Part 2 - Practice Makes Perfect
3. Deeper eLearning Design: Part 3 - Concepts
4. Deeper eLearning Design: Part 4 - Examples
5. Deeper eLearning Design: Part 5 - Emotion
6. Deeper eLearning Design: Part 6 - Putting it All Together

I’ve put into these posts my best thinking around learning design. The final one’s been posted, so now I can collect the whole set here for your convenience. And don’t forget the Serious eLearning Manifesto! I hope you find this useful, and welcome your feedback.
Clark . Blog . Nov 22, 2015 04:53am
One of the ways I’ve been thinking about the role mobile can play in design is to think about how our brains work, and don’t. It came out of both mobile and the cognitive science for learning workshop I gave at the recent DevLearn. This applies more broadly to performance support in general, so I thought I’d share where my thinking is going.

To begin with, our cognitive architecture is demonstrably awesome; just look at your surroundings and recognize that your clothing, housing, technology, and more are the product of human ingenuity. We have formidable capabilities to predict, plan, and work together to accomplish significant goals. On the other hand, there’s no one all-singing, all-dancing architecture out there (yet), and every approach has weak points. Technology, for instance, is bad at pattern-matching and meaning-making, two things we’re really pretty good at. On the flip side, we have some flaws too. So what I’ve done here is to outline those flaws, and how we’ve created tools to get around the limitations. And to me, these are principles for design.

For instance, our senses capture incoming signals in a sensory store, which has the interesting property of almost unlimited capacity, but only for a very short time. There’s no way all of it can get into our working memory, so what we attend to is what we have access to, and we can’t recall what we perceive accurately. However, technology (camera, microphone, sensors) can record it all perfectly, so making capture capabilities available is a powerful support.

Similarly, our attention is limited, so if we’re focused in one place, we may forget or miss something else. However, we can program reminders or notifications that help us recall important events we don’t want to miss, or draw our attention where needed.
The limits on working memory (you may have heard of the famous 7±2, which is really <5) mean we can’t hold too much in our brains at once, such as the interim results of complex calculations. However, calculators can do such processing for us. We also have limited ability to carry information around, for the same reasons, but we can create external representations (such as notes or scribbles) that hold those thoughts for us. Spreadsheets, outlines, and diagramming tools allow us to record our interim thoughts for further processing.

We also have trouble remembering things accurately. Our long-term memory tends to remember meaning, not particular details. However, technology can remember arbitrary and abstract information completely. What we need are ways to look up that information, or search for it. Portals and lookup tables trump trying to put that information into our heads.

We also have a tendency to skip steps. We have some randomness in our architecture (a benefit: if we sometimes do it differently, and occasionally that’s better, we have a learning opportunity), but this means we don’t execute perfectly. However, we can use process supports like checklists. Atul Gawande wrote a fabulous book on the topic that I can recommend.

Other phenomena: previous experience can bias us in particular directions, but we can put in place supports that provide lateral prompts. We can also prematurely evaluate a solution rather than checking to verify it’s the best; data can be used to help us be aware. And we can trust our intuition too much, and we can wear down, so we don’t always make the best decisions. Templates, for example, are a tool that can help us focus on the important elements.

This is just the result of several iterations, and I think more is needed (e.g. about data to prevent premature convergence), but to me it’s an interesting alternate approach to consider where and how we might support people, particularly in situations that are new and as yet untested. So what do you think?
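The limitation-to-support pairing above can be summarized as a simple mapping, paraphrasing the post’s own examples (the labels are my shorthand, not established terminology):

```python
# Cognitive limitation -> example technology support, as discussed above.
SUPPORTS = {
    "limited sensory recall": "capture tools (camera, microphone, sensors)",
    "limited attention":      "reminders and notifications",
    "limited working memory": "calculators and external representations",
    "meaning-based memory":   "portals and lookup tables for exact details",
    "skipped steps":          "checklists for process execution",
    "bias and fatigue":       "data, lateral prompts, and templates",
}

for limitation, support in SUPPORTS.items():
    print(f"{limitation}: {support}")
```

Laid out this way, each row reads as a design principle: identify which limitation the performer faces, then reach for the matching class of support.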
Clark . Blog . Nov 22, 2015 04:53am
So I’ve been pushing an L&D Revolution, and for good reason. I truly believe that L&D is on a path to extinction because, as my mantra would have it: "it isn’t doing near what it could and should, and what it is doing, it is doing badly; otherwise it’s fine." So many bad practices - info-dump and knowledge-test classes, no alternative to courses, lack of measuring impact - mean that L&D is out of touch with the information age. And with everyone able to access the web, content creation tools, and social media environments, wherever and whenever they are, people can survive and thrive without what L&D does, and are doing so.

What I’ve argued is that we need to align with how we really think, work, and learn, and bring that to the organization. What L&D could be doing - providing a rich performance ecosystem that not only empowers optimal execution but fosters the necessary continual innovation - is a truly deep contribution to the success of the organization.

I feel so strongly about this that I wrote a book about it. If you’ve read it, you know it documents the problems, provides framing concepts, is illustrated with examples, and offers a roadmap forward (if you’ve read and liked it, I’d love an Amazon review!). And while it’s both selling reasonably well (as far as I can tell; the information from my publisher is impenetrable ;) and leading to speaking opportunities, I fear it’s not getting to the right people. Frankly, most of my speaking and writing has been at the practitioner and manager level, and this is really for directors and up, all the way to the C-suite, potentially. And while I make an effort to get this idea into their vision, there’s a lot of competition, because everyone wants the C-suite’s attention. The point I want to make is that the real audience for this book is your boss (unless you’re the CEO, of course ;).
And I’m not saying this to sell books (I’m unlikely to make more than enough to buy a couple of cups of coffee off the proceeds, given book contracts), but because I think the message is so important! So, let me implore you to consider somehow getting the revolution in front of your boss, or your grandboss, and up.  It doesn’t have to be the book, but the concept really needs to be understood if the organization is going to remain competitive.  All evidence points to the fact that organizations have to become more agile, and that’s a role L&D is in a prime position to facilitate.  If, however (and that’s a big if), they get the bigger picture.  And that’s the message I’m trying to spread in all the ways I can see.  I welcome your thoughts, and your assistance even more.
Clark . Blog . Nov 22, 2015 04:52am
At the recent DevLearn conference, one of the keynotes was Adam Savage. And he said something that gave me a sense of validation. He was talking about being a polymath, and I think that’s worth understanding. His point was that his broad knowledge of a lot of things was valuable. While he wasn’t the world’s expert in any particular thing, he knew a lot about a lot of things. Now if you don’t know him, it helps to understand that he’s one of the two hosts of Mythbusters, a show that takes urban myths and puts them to the test. This requires designing experiments that fit within pragmatic constraints of cost and safety, and will answer the question. Good experiment design is an art as well as a science, and given the broad range of what the myths cover, this ends up requiring a large amount of ingenuity. The reason I like this is that my interests vary broadly (ok, I’m coming to terms with a wee bit of ADD ;). The big picture is how technology can be designed to help us think, work, and learn. This ends up meaning I have to understand things like cognition and learning (my Ph.D. is in cognitive psychology), computers (I’ve programmed and designed architectures at many levels), design (I’ve looked at usability, software engineering, industrial design, architectural design, and more), and organizational issues (social, innovation…). It’s led to explorations covering things like games, mobile, and strategy (e.g. the topics of my books). And more; I’ve led development of adaptive learning systems, content models, learning content, performance support, social environments, and so on. It’s led me further, too, exploring org change and culture, myth and ritual, engagement and fun, aesthetics and media, and other things I can’t even recall right now. And I draw upon models from as many fields as I can.
My Ph.D. research was related to the power of models as a basis for solving new problems in uncertain domains, and so I continue to collect them like others collect autographs or music. I look for commonalities, and try to make my understanding explicit by continuing to diagram and write about my reflections. I immodestly think I draw upon a broad swath of areas. And I particularly push learning to learn and meta-cognition to others because they’ve been so core to my own success. What I thrive on is finding situations where the automatic solutions don’t apply. It’s not just a clear case for ID, or performance support, or… It’s where technology can be used (or used better) in systemic ways to create new opportunities. Where I really contribute is where it’s clear that change is needed, but what, how, and where to start aren’t obvious. I have a reliable track record of finding unique yet pragmatic solutions to such situations, including in the areas named above. And it is a commitment of mine to do so in ways that pass on that knowledge: to work in collaboration to co-develop the approach, share the concepts driving it, and hand off ownership to the client. I’m not looking for a sinecure; I want to help while I’m adding value and move on when I’m not. And many folks have been happy to have my assistance. It’s hard for me to talk about myself in this way, but I reckon I bring that polymath ability of a broad background to organizations trying to advance. That’s ranged from helping them develop design processes that yield better learning outcomes, through mobile strategies and solutions that meet their situation, to overarching organizational strategies that map from concepts to systems. There’s a pretty fair track record to back up what I say. I am deep in a lot of areas, and have the ability to synthesize solutions across these areas in integrated ways.
I may not be the deepest in any one area, but when you need to look across them and integrate a systemic solution, I like to think, and try to ensure, that I’m your guy. I help organizations envision a future state, identify the benefits and costs, and prioritize the opportunities to define a strategy. I have operated independently or with partners, but I adamantly retain my freedom to say what I truly think, so that you get an unbiased response from the broad suite of principles I have to hand. That’s my commitment to integrity. I didn’t intend this to be a commercial, but I did like his perspective, and it made me reflect on what my own value proposition is. I welcome your thoughts. We now return you to your regularly scheduled blog, already in progress…
Clark . Blog . Nov 22, 2015 04:52am
As I read more about how to create organizations that are resilient and adaptable, there’s an interesting emergent characteristic. What I’m seeing is a particular pattern of structure that has arisen out of totally disparate areas, yet keeps repeating. While I haven’t had a chance to think about it at scale, like how it would manifest in a large organization, it certainly has some strengths. Dave Gray, in his recent book The Connected Company that I reviewed, has argued for a ‘podular’ structure, where small groups of people are connected in larger aggregations but work largely independently. He argues that each pod is a small business within the larger business, which gives flexibility and adaptiveness. Innovation, which tends to get stifled in a hierarchical structure, can flourish in this more flexible arrangement. More recently, on Harold Jarche‘s recommendation, I read Niels Pflaeging’s Organize for Complexity, a book also on how to create high-performance organizations. While I think the argument was a bit sketchy (to be fair, it’s deliberately graphic and lean), I was sold on the outcomes, and one of them is ‘cells’: small groups of diverse individuals accomplishing a business outcome. He makes clear that this is not departments in a hierarchy, but flat communication between cross-functional teams. And, finally, Stan McChrystal has a book out called Team of Teams, which builds upon the concepts he presented in a keynote I mindmapped previously. This emerged from how the military had to learn to cope with rapid changes in tactics. Here again, the same concept emerges: small groups working with a clear mission and the freedom to pursue it. This also aligns well with the results implied by Dan Pink’s Drive, where he suggests that the three critical elements for performance are providing people with important goals, the freedom to pursue them, and support to succeed.
Small teams also fit well with what’s known about getting the best ideas and solutions out of people, such as in brainstorming. These are nuances on top of Jon Husband’s Wirearchy, where we have some proposed structure around the connections. It’s clear that to become adaptive, we need to strengthen connections and decrease structure (interestingly, this also reflects the organizational equivalent of nature’s extremophiles). It’s about trust and purpose and collaboration and more. And, of course, about creating a culture where learning is truly welcomed. Interesting that out of responding to societal changes, organizational work, and military needs, we see a repeated pattern. As such, I think it’s worth taking notice. And there are clear L&D implications, I reckon. What say you? #itashare
Clark . Blog . Nov 22, 2015 04:51am
In some recent work, an organization is looking for a way to learn fast enough to cope with the increasing changes we’re seeing. Or, better yet, to learn ahead of the curve. And this led to some thoughts. As a starting point, it helps to realize that adapting to change is a form of learning. So, what are the individual equivalents we might use as an analogy? Well, in known areas we take a course. On the other hand, for self-learning, e.g. when there isn’t a source for the answer, we need to try things. That is, we need a cycle of: do - review - refine. In the model of a learning organization, experimentation is clearly listed as a component of concrete learning processes and practices. And my thought was that it is therefore clear that any business unit or community of practice that wants to be leading the way needs to be trying things out. I’ve argued before that learning units need to be using new technologies to get their minds around the ‘affordances’ possible to support organizational performance and development. Yet we see that far too few organizations are using social networks for learning (< 30%), for example. If you’re systematically tracking what’s going on, determining small experiments to trial the implications, and documenting and sharing the results, you’re going to be learning out ahead of the game. This should be the case for all business units, and I think this is yet another area that L&D could and should be facilitating. And by facilitating, I mean: modeling (by doing it internally), evangelizing, supporting in process, publicizing, rewarding, and scaling. I think the way to keep up with the rate of change is to be driving it. Or, as Alan Kay put it: "the best way to predict the future is to invent it". Yes, this requires some resources, but it’s ultimately key to organizational success, and L&D can and should be the driver of the process within the organization.
Clark . Blog . Nov 22, 2015 04:51am
David Mallon kicked off the HR Tech X conf with a clear call for HR to be bold.
Clark . Blog . Nov 22, 2015 04:51am
Laura used Towards Maturity data to provide insight into how leading L&D organizations are making their way.
Clark . Blog . Nov 22, 2015 04:51am
One of the positive results of investigations into making work more effective has been the notion of transparency, which manifests as either working and learning ‘out loud’, or in calls to Show Your Work. In these cases, it’s so people can know what you’re doing, and either provide useful feedback or learn from you. However, a recent chat in the L&D Revolution group on LinkedIn on Augmented Reality (AR) surfaced another idea. We were talking about how AR could be used to show how to do things, providing information, for instance, on how to repair a machine. This has already been seen in examples from BMW, for instance. But I started thinking about how it could be used to support education, and took it a bit further. Many years ago, Jim Spohrer proposed WorldBoard, a way to annotate the world. It was like the WWW, but location specific, so you could have specific information about a place at the place. It was a good idea that got some initial traction but obviously didn’t continue. The point, however, would be to ‘expose’ the world. In particular, given my emphasis on the value of models, I’d love to have models exposed. Imagine what we could display:
the physiology of an animal we’re looking at
the flows of energy in an ecosystem
the architectural or engineering features of a building or structure
the flows of materials through a manufacturing system
the operation of complex devices
The list goes on. I’ve argued before that we should expose our learning designs as a way to hand over learning control to learners, developing their meta-learning skills. I think if we could expose how things work and the thinking behind them, we’d be boosting STEM in a big way. We could go further, annotating exhibits and performances as well. And it could be auditory as well as visual, so you might not need glasses, or you could just hold up the camera and see the annotations on the screen. You could of course turn them on or off, and choose which filters you want.
The systems exist: Layar commercially, ARIS in the open source space (with different capabilities). The hard part is the common frameworks: agreeing on what and how, etc. However, the possibility of really raising understanding is very much an opportunity. Making the workings of the world visible seems to me a very intriguing way to leverage the power we now hold in our hands. Ok, so this is ‘out there’, but I hope we might see it flourish quickly. What am I missing?
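To make the idea concrete, here’s a minimal sketch of what a WorldBoard-style annotation layer might look like as a data structure: annotations anchored to a place, each tagged with a model ‘layer’ that the viewer can toggle on or off. All the names here (WorldBoard, Place, Annotation, the layers) are purely illustrative assumptions, not any real AR system’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    layer: str   # which model/filter this belongs to, e.g. "engineering"
    text: str    # the exposed model or explanation

@dataclass
class Place:
    name: str
    lat: float
    lon: float
    annotations: list = field(default_factory=list)

class WorldBoard:
    """Hypothetical location-anchored annotation store with toggleable layers."""

    def annotate(self, place: Place, layer: str, text: str) -> None:
        place.annotations.append(Annotation(layer, text))

    def view(self, place: Place, active_layers: set) -> list:
        # Show only the annotations whose layer the viewer has switched on
        return [a.text for a in place.annotations if a.layer in active_layers]

board = WorldBoard()
bridge = Place("Harbour Bridge", -33.852, 151.211)
board.annotate(bridge, "engineering", "Steel through-arch; load path runs through the pylons.")
board.annotate(bridge, "history", "Opened in 1932.")

# A viewer with only the engineering filter on sees only that model
print(board.view(bridge, {"engineering"}))
```

The point of the sketch is the filtering step: the world’s annotations are always there, and what you see depends entirely on which model layers you choose to expose.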
Clark . Blog . Nov 22, 2015 04:51am
Changing behavior is hard. The brain is arguably the most complex thing in the known universe. Simplistic approaches aren’t likely to work. To rewire it, one approach is to try surgery. This is problematic for several reasons: it’s dangerous, it’s messy, and we really don’t understand enough about it. What’s a person to do? Well, we do know that the brain can rewire itself, if we do it right. This is called learning. And if we design learning, e.g. instruction, we can potentially change the brain without surgery. However (and yes, this is my point), treating it as anything less than brain surgery (or rocket science) isn’t doing justice to what’s known and what’s to be done. The list of ways to get it wrong is long. Information dump instead of skills practice. Massed practice instead of spaced. Rote knowledge assessment. Lack of emotional engagement. The list goes on. (Cue the Serious eLearning Manifesto.) In short, if you don’t know what you’re doing, you’re likely doing it wrong and are not going to have an effect. Sure, you’re not likely to kill anyone (unless you’re doing this where it matters), but you’ll waste money and time. Scandalous. Again, the brain is complex, and consequently so is learning design. So why, in the name of sense and money, do we treat it as trivial? Why would anyone buy a story that we can achieve anything meaningful by taking content and adding a quiz (read: rapid eLearning)? As if a quiz is somehow going to make people do better. Who would believe that just anyone can present material and learning will occur? (Do you know the circumstances when that will work?) And really, throwing fuzzy objects around the room and ice-breakers will somehow make a difference? Please. If you can afford to throw money down the drain (ok, if you insist, throw it here ;), and don’t care whether any meaningful change happens, I pity you, but I can’t condone it. Let’s get real. Let’s be honest.
There’s a lot (a lot) of things being done in the name of learning that are just nonsensical. I could laugh, if I didn’t care so much. But I care about learning. And we know what leads to learning. It’s not easy. It’s not even cheap. But it will work. It requires good analysis, and some creativity, and attention to detail, and even some testing and refinement, but we know how to do this. So let’s stop pretending. Let’s stop paying lip-service. Let’s treat learning design as the true blend of art and science that it is. It’s not the last refuge of the untalented, it’s one of the most challenging, and rewarding, things a person can do. When it’s done right. So let’s do it right! We’re performing brain surgery, non-invasively, and we should be willing to do the hard yards to actually achieve success, and then reap the accolades. OK, that’s my rant, trying to stop what’s being perpetrated and provide frameworks that might help change the game. What’s your take?
Clark . Blog . Nov 22, 2015 04:50am
Roger gave his impassioned, opinionated, irreverent, and spot-on talk to kick off LearnTechAsia. He covered the promise (or not) of AI, learning, stories, and the implications for education.
Clark . Blog . Nov 22, 2015 04:50am