Some great posts are already out there around the Big Question for January: Quality vs. Speed. You can find them listed in the post. One thing that is definitely clear from the posts so far is that there is a real difference of opinion around: while you may be able to reduce development time via rapid tools, can you speed up analysis and design and still maintain quality? Does
The Learning Circuits Blog . Blog . Aug 19, 2015 03:40am
I love doing a good training audit as much as anyone. But you can do a lot yourself. When I review training budgets for clients, one tool I use is to break spending into three categories: 1. The money spent to fulfill obligations. These are the funds spent doing what the training group had committed to doing in the past. These might include ethics and sexual harassment training, new
The Learning Circuits Blog . Blog . Aug 19, 2015 03:39am
I just got results back from two pilots with the goal of evaluating the same 15-hour training program. In one pilot, the program sponsor cajoled a bunch of colleagues into doing the program. He oversold the fun aspects of it, and undersold the real work and time required. Less than 20% of the way into the program, the coach had to (appropriately) push on the participants for not
The Learning Circuits Blog . Blog . Aug 19, 2015 03:39am
Let me ask you all this one question, which I was asked recently by a well-known governor: if we could change just one thing in the U.S. school system, what would it be? Is there any chance that we can break the Bryan Chapman critique that we as a community of education and training professionals all agree in our dislike of the current system and disagree on what to do better? Could we work together, with our various degrees of clout, and make things better in the school system? Stage one is brainstorming, so let's air all of the ideas.
The Learning Circuits Blog . Blog . Aug 19, 2015 03:39am
It's the beginning of another month! Time for another Big Question. November's question got passions raging and sparked great conversation with real substance, whose embers are still glowing as more comments continue to flow in. While we had fewer participating posts this month (21 registered and 10+ found so far that didn't register with us), the level and depth of the conversations were much
The Learning Circuits Blog . Blog . Aug 19, 2015 03:38am
I am often asked to prove that simulations work better than traditional formal learning programs. As a tiny bit of background, I wrote this a few years ago in my column in Online Learning magazine:

"People often ask me what the return on investment (ROI) of e-learning is. I tell them it's 43 percent. How did I come up with that figure? Truth be told, I made it up. That's because knowing the ROI of e-learning is sort of like knowing that the average depth of the ocean is 2.5 miles. Interesting, but not very helpful to a ship's captain."

Given that, I have done some studies for both my own simulations (Virtual Leader) and others' (Ngrain). I have interviewed countless practitioners, users, and sponsors. I have been involved in surveys and studies. I have argued that simulations come in genres (such as branching stories), and that analysis should be done at the genre level. And I still have no idea how to approach that question. I don't even know who is qualified to measure effectiveness, or even to define what effectiveness is. Even within a neutral body, there are advocates who do the study.

So what do people think? I am not asking, "Are simulations more effective?" I am asking, "What is the simplest argument that you would find compelling?"
The Learning Circuits Blog . Blog . Aug 19, 2015 03:38am
(originally posted by Clark Quinn)

Well, I really want to reply to Peter, but right now I've got a bee up my bonnet, and I want to vent (how's that for mixing my metaphors?). I'll get to Peter's comments in a moment...

In recent work, I've reliably been coming up against a requirement for a pre-test. And I can't for the life of me figure out why: they're not using the data to do anything but compare it to the post-test! This didn't make any sense to me, so I did a Google search to see what came up. In "Going Beyond Smile Sheets... How Do We Know If Training Is Effective?" by Jeanie McKay, NOVA Quality Communications, I came across this quote:

[Level Two] To evaluate learning, each participant should be measured by quantitative means. A pre-test and post-test should be administered so that any learning that takes place gets attributed to the training program. Without a baseline for comparison of the as-is, you will never be able to reveal exactly how much knowledge has been obtained.

Now, I don't blame Jeanie here; I'm sure this is the received wisdom. But I want to suggest two reasons why this is ridiculous. First, from the learner's point of view, having to do a pre-test for content you're going to have to complete anyway is just cruel, particularly if the test is long (in a particular case, it's 20 items). The *only* reason I can see to do this is if you use that information to drop out any content that the learner already knows. That would make sense, but it's not happening in this case, and probably not in too many others.

Second, it's misleading to claim that the pre-test is necessary to assess learning. In the first place, you should have done the work to justify that this training is needed, and you should know your audience, so you should have already established that they require this material. Then, you should design your post-test so that it adequately measures whether they know the material at the end. Consequently, it doesn't matter how well they knew it beforehand.
It might make sense to justify the quality of the content, but even that's fallacious. We expect improvement in pre-post test designs anyway (which is why, in psychology, a pre-post comparison without a control group is not accepted as a way to determine the effectiveness of an intervention), so it doesn't really measure the quality of the content. Though it could be considered a benefit to the learning outcome, there are better ways to accomplish this. There is no value to the pre-test in these situations; consequently, it's cruel and unusual punishment for the learner and should be considered unlawful.

OK, I feel better now, having gotten that off my chest. So, on to Peter's comments. I agree that we want rich content, but if we keep the current redundancy to address all learners, we risk boring everyone in order to make sure every learning style is covered. We *could* provide navigation support through the different components of content to allow learners to choose their own path (and I have). That works fine with empowered learners, but that currently characterizes no more than about half the population. The rest want hand-holding (and that's what we did), but that leaves the redundancy. Which, frankly, is better than most content (although UNext had/has a similar scheme).

However, I'm suggesting that we optimize the learning to the learner. I'm not arguing that we assess their cultural identity, but that we understand the full set of capabilities they bring to bear as a learner (my cultural point is that we're better off understanding them as individuals, not using a broad cultural stereotype to assume we understand them). That is, for some we might start with an example, rather than the 'rule' or 'concept'. For some we might even start with practice. We might also present some with stories, others with comic strips or videos. Moreover, we drop out bits and pieces. A rallying cry: use what we know to choose what to show.
Yes, additional steps in content development are required to do this (see my IFETS paper), but the argument is that the payoff is huge. The assessment is indeed a significant task, but in a long-term relationship with the learner, we can do something particularly valuable. If we know what the learner's strengths and weaknesses are, we can use the former to accelerate their learning, and we can also take time to address the latter. A simple approach would be to present 'difficult' content with some support that, over time, would be internalized and improve the learner's capabilities. Improving the learner as a learner: now THAT's a worthwhile goal!

I strongly support Peter's suggestion that using a rich world as a source for embedding (or extracting) learning to make it meaningful is ultimately valid; it is the basis of much of my work on making learning engaging. We may be agreeing furiously, except that I may not have made clear what I meant by learner assessment.

In answer to Peter's query, I'm sad to report that we have not published, and cannot publish, on the 31 dimensions. I can only suggest the path we took: use Jonassen & Grabowski's Handbook of Individual Differences as an uncritical survey of potential candidates, along with other likely suspects from any source your research uncovers, then make some sensible inferences to remove redundancies (much as the 'Big 5' personality factor work attempts to make sense of personality constructs). Make sure you cover the gamut of things that might influence learning, including cognitive, affective, and personality factors.
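Clark's earlier point about pre-post designs can be made concrete with a small sketch. The scores and group labels below are entirely hypothetical: both groups take the same pre- and post-test, but only one receives the training. A naive pre-to-post gain confounds the training effect with retest and practice effects; comparing against the untrained group's gain (a difference-in-differences) isolates what the training itself contributed.

```python
# Hypothetical test scores illustrating why a control group matters.
# Both groups take the same pre- and post-test; only "trained" gets the course.
trained_pre  = [52, 48, 60, 55, 45]
trained_post = [78, 74, 85, 80, 73]
control_pre  = [50, 53, 58, 47, 49]
control_post = [58, 60, 66, 55, 57]  # improves from retesting alone

def mean(xs):
    return sum(xs) / len(xs)

# Naive gain: what a pre/post comparison alone would report.
naive_gain = mean(trained_post) - mean(trained_pre)      # 26.0 points

# Gain in the untrained group: retest/practice effects only.
control_gain = mean(control_post) - mean(control_pre)    # 7.8 points

# Difference-in-differences: training effect net of retest effects.
training_effect = naive_gain - control_gain              # 18.2 points
```

With these made-up numbers, the naive pre/post comparison overstates the training effect by about 40%, which is exactly the inferential gap Clark describes: without the control group, there is no way to know how much of the 26-point gain the training actually caused.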
The Learning Circuits Blog . Blog . Aug 19, 2015 03:38am
(originally posted by Peter Isackson)

Curiously, Clark and I don't seem to know for certain whether we "furiously" agree or not. This seems rather typical of the whole learning business. I tend to agree with Clark that we do agree! The problem is that at different times we are probably referring to different phenomena. My suggestions were very general, pointing towards the overall strategy for handling a variety of content, which I see as process (transforming input into output). I also glanced at questions of content selection in the light of cultural variation. When we focus on specific content needs, particularly the "learning objects" we hope to find somewhere or need to produce ourselves, we are faced with these cultural problems, which, as Clark points out, constitute helps or hindrances depending on 1) the profile of the individual learner, and 2) the trainer's awareness or even real knowledge of that profile. I think a lot of work needs to be done on both at the same time. I don't believe we have any valid human models yet for dealing with this efficiently (i.e. converting information into effective strategy), and everyone else (i.e. the knowledge management specialists) seems to be focused on structuring the information. I believe that this is only the first step and may need some guidance from the strategy side to develop the right structural models.

A new theme occurred to me today, and I have no idea what it's worth or how far it can be taken, so for the sake of my own ongoing reflection I'll state it here (I need to set it down somewhere!) and await any constructive or, why not, destructive criticism. It is curiously linked to the bee in Clark's bonnet, but inverted (the stinger is on the other end!). The notion has to do with the teacher's or trainer's state of knowledge -- not the learner's -- before and after a course. I am not, however, suggesting pre- and post-testing!
I am suggesting that the trainer's state of knowledge should evolve almost as much as the learner's, and that we should take an interest in tracking this evolution. The context I am referring to is that of collaborative online learning. This wouldn't be the same thing for traditional face-to-face teaching (but see my final remarks below), and even less so for pre-programmed eLearning (which I see increasingly as isolated or modular learning objects, whose meaning and impact derive from the variable contexts in which they are used more than from their internal merits).

My notion is that of a kind of open or "improvisational teaching": a strategy that specifically aims at learning to teach a particular course by teaching it, after defining its overall structure and logic. It proceeds from two observations:

1) no one can fully anticipate what will happen in the learning process, particularly in distance learning;
2) we do not necessarily know in advance which resources, among all that are available, will prove the most productive for real learners (in all their cultural variety).

My notion of improvisation is borrowed from jazz, one of my previous occupations*. To be good at improvising, you have to learn not only the art of soloing (which you at least partly invent); you must also know the chord changes (plus variations) of the tunes you are playing, the chosen style for each number, your precise role in the ensemble sections and, especially if you are accompanying rather than just soloing, have a good idea of the style and system of each of the other players. These multiple constraints nevertheless leave you free to discover through playing the things that work and don't work, both in general and with regard to each type of musical event. The most interesting thing about working with other musicians is what you learn from them each time you rehearse or play.
And of course, the more you play a particular tune, the easier it gets to keep it going and to find ways of innovating and surprising without upsetting the underlying logic or the other musicians.

In short, I'm in favor of under-planning one's course strategies and leaving room for us to learn from the learners themselves. Actually, it's less under-planning than avoiding over-planning. This means, without sacrificing one's "authority", learning how to encourage the learners to bring things to you: discovery of appropriate resources you may not have been aware of, new ideas or ways of looking at the material, patterns or sequences of behavior that produce learning more effectively than your initial game plan. In other words, we should seek to be instructional co-designers rather than instructional designers.

It might be said that what I'm describing is a form of beta testing, but its implications are very different. You beta test something that is fully designed down to the last detail. What I'm suggesting is a system in which we as trainers and designers are actively concerned, at least the first time around, to integrate elements that come from the learners, or rather from our own interaction with the learners. This can obviously only apply to collaborative training, but it can lead to strategies for producing learning objects. Much needs to be said on how to conduct this approach: how to create the overall model, how to manage events, how to communicate with learners, how to react to embarrassing mistakes, how to make permanent or replicable everything one learns, and so on.

After a brief search on the web, I found that David Hammer of the University of Maryland, in a context of traditional face-to-face instruction, calls a similar approach "discovery teaching" and identifies some of the areas of teacher resistance to it. My contention is that it is less risky and more appropriate in an online environment.
It is also easier to structure, plan, and capitalize on.

* I ended up living in Paris because, after participating in a free-for-all jam session organized by Steve Lacy at the American Center nearly 30 years ago, I was offered a permanent job as a pianist (accompanying dance classes at the Université de Paris) and accepted it in order to become fluent in French!
The Learning Circuits Blog . Blog . Aug 19, 2015 03:37am
Hi everyone. Some of you may have caught glimpses of some work I'm doing behind the scenes of LCB. If you had a chance to see Peter Isackson's post on improvisational learning from May of 2002, which accidentally sat at the top of this page for the last 24 hours or so, you got a glimpse of the future by seeing the past.

I've begun the process of consolidating all of the posts of LCB from 2002 to the present into one environment, in anticipation of a migration to a new environment sometime in the near future.

If you're the type to volunteer for some grunt work, keep your eyes on this space. As I get a bit more organized and have a feel for the tasks that need to be completed to achieve the transformation of LCB, I'll be looking for some help. If you can't wait to volunteer, please let me know by commenting on this post and I'll be sure to find something for you to do!

Now, back to the Big Question. For now, Dorothy, the balloon's not ready to depart for Kansas quite yet!
The Learning Circuits Blog . Blog . Aug 19, 2015 03:37am
The February Big Question goes to the root of what The Big Question is all about. It is a topic that has bothered Tony for a while. In a session on Informal Learning by Jay Cross, Harold Jarche, and Judy Brown at ASTD TechKnowledge, you could easily see great questions being raised by both the presenters and the audience: "How can I help my organization improve the quality and quantity of conversations?" and "How can I create informal learning experiences for new managers in my organization?" These questions offered a fantastic opportunity for discussion and understanding of the subject.

Tony's revelation was that one of the biggest questions facing us, if not THE biggest, is that we don't know the right questions to ask in a given situation. Sometimes we're asking one question when we should be asking a different one.

So, this month, The Big Question is... What Questions Should We Be Asking?

Please answer this question by posting to your own blog or commenting on this post. (For further help in how to participate via blog posts, see the sidebar.)

Point to Consider: Feel free to list questions from lots of different perspectives and at lots of different levels. One last note: don't worry about answering the questions you suggest. Perhaps we'll do that in the future.

Participating Blogs: The form for February's Big Question has been closed. If you have a post in response to the February Big Question, please contact the Blogmeister by using the Dear Blogmeister form, which is linked from the top of the sidebar.

NOTE: If the forms do not appear below, please hit your browser's refresh button. If the forms still do not appear, please use the Dear Blogmeister form, which is linked from the top of the sidebar.

Comments Feed: Unfortunately, it seems that the "New Blogger" handles comment metadata differently than the "Old Blogger." The new way is not compatible with CoComment, at least for the time being.
Since so many of our wonderful Big Question participants are using Blogger, this has rendered the process I've been using to create the comments feed pretty much useless, unless I do a tremendous amount of handwork once I get the CoComment feeds over to MySyndicaat. Since I've got far too much on my plate already, I'm opting to search for a new solution to incorporate into the "new LCB" that's being worked on. So please accept my apologies, but there won't be a comments feed as part of The Big Question for the next few months. - Dave, your humble blogmeister.
The Learning Circuits Blog . Blog . Aug 19, 2015 03:36am