Stewart Milton from BlueOrange is a world-class expert in e-Learning, and was generous enough to share ten useful tips for effectively creating e-Learning courses. Whether you’re a beginner or an expert yourself, these principles of e-Learning course authoring can provide you with an excellent basis for creating high-quality content every time. I have been very fortunate to have taught e-Learning applications and course design to so many people and organizations. In this short article I would like to share some of the lessons I have passed on to so many talented people. I hope that you might be able to use some of my suggestions.
iSpring Solutions . Blog . Dec 07, 2015 03:02pm
PowerPoint is primarily known as a tool for creating presentations. However, this software also gives you the opportunity to create multiple-choice quizzes, one of the most common types of test. Here’s a detailed guide on how to make a quiz in PowerPoint.
iSpring Solutions . Blog . Dec 07, 2015 03:02pm
MIT defines blended learning as: "...structured opportunities to learn, which use more than one learning or training method, inside or outside the classroom. This definition includes different learning or instructional methods (lecture, discussion, guided practice, reading, games, case study, simulation), different delivery methods (live classroom or computer mediated), different scheduling (synchronous or asynchronous) and different levels of guidance (individual, instructor or expert led, or group/social learning)." More simply, a blended learning solution is a learning modality that combines offline and online aspects to get the best possible result for the students.
iSpring Solutions . Blog . Dec 07, 2015 03:02pm
Sharon Shrock and Bill Coscarelli recently completed the third edition of their important book, Criterion-Referenced Test Development: Technical and Legal Guidelines for Corporate Training. If this book isn't in your collection already, I'll give you a link below to buy from Amazon.com.

In this third edition, Sharon and Bill have updated the book from the second edition (published in 2000) in some critical ways. One of those ways is truly transformational for the workplace learning and performance field. I'll get to that in a minute. Also updated is the excellent chapter at the end of the book by Patricia S. Eyres, a lawyer with employment-law credentials. Her chapter covers the legal ramifications of, and guidelines for, employee testing, especially as that testing affects employee selection, advancement, and retention. She has updated her chapter with new case law and legal precedent since the second edition. Most people in the training field have very little knowledge of the legal ramifications of testing, and I'd recommend the book for this chapter alone—it's a great wakeup call that will spur a new appreciation of the legal aspects of testing.

In the second edition, Shrock and Coscarelli put forward what they call the "Certification Suite." In criterion-referenced testing, the goal is to decide whether a test taker has met a criterion or not. When they have met the criterion, they are said to be "certified" as competent in the area on which they were tested. The Certification Suite has six levels, some of which offer full Certification and some of which offer Quasi-Certification:

Certification
- Level A: Real World
- Level B: High-Fidelity Simulation
- Level C: Scenarios

Quasi-Certification
- Level D: Memorization
- Level E: Attendance
- Level F: Affiliation

As the authors say in the book (p. 111), "Level C represents the last level of certification that can be considered to assess an ability to perform on the job." 
The truly transformational thing offered by Shrock and Coscarelli is this: Level D Memorization, in the second edition of the book, was considered to offer Certification. NO MORE!! That's right. Two of our leading thinkers on testing say that memorization questions are no longer good enough!

Disclosure: In speaking with Bill Coscarelli in 2006, I gently encouraged this change. This is mentioned in the book, so it's not like I'm bragging. SMILE.

I love this, of course, because it follows what we know about human learning. For tests to be predictive of real-world performance, they have to offer cues similar to those that learners will face in the real world. If they offer different cues—like almost all memorization questions do—they are just not relevant. And, from a learning standpoint (as opposed to a certification standpoint), memorization questions won't spur spontaneous remembering through the triggering mechanism of real-world cues.

This literal and figurative raising of the bar—moving it beyond memorization—should shake us to our core (especially since this is one of the few books on assessment that covers legal issues, so it may have some evidentiary heft in court). If the compliance tests at the end of your e-learning programs are based on memorization questions, you are so in trouble. If your credentialing is based on completion (and 85% of our respondents in the eLearning Guild research report said they utilized completion as a learning measure), you are in even worse trouble. And, of course, if you ever thought your memorization-level questions supported learning, well, sorry. They don't! At least not as strongly as they might.

Have you bought the book yet? You should. You ought to at least have it around to show management (or your clients) why it's important (absolutely freakin' critical) to use high-value assessment items.

I've got some quibbles with the book as well. They list 6 reasons for testing. 
I've recently come up with 18, so it appears they're missing some, or I'm drinking too much. I also don't like the use of Bloom's Taxonomy to index some of the recommendations. In short, Bloom's has issues.

I don't like the way they talk about learning objectives. They rely on a single objective to guide the process of both instructional design and evaluation. I now advocate freeing instructional-design objectives from the crazy constraint of being super-glued to the evaluation objectives. The two need to be linked, of course, but not hog-tied.

I wish they emphasized more strongly the distinction between testing to assess and testing to support learning. They are different animals, and most of us are confused about this.

Overall, it's a great and thoughtful book. I bought it. You should too. Here's a link to buy it.

The Learning Measurement Series will continue in January... (But watch to see who wins this year's Neon Elephant Award, which I'll announce on Saturday, December 22nd, 2007. The winner(s) is/are all about learning measurement.)
Will Thalheimer . Blog . Dec 07, 2015 02:04pm
An article in the New York Times discusses research on group creativity. One thing the research has shown is that brainstorming may not be as beneficial as once thought—because individuals working alone come up with better ideas, and then the group is needed to improve those ideas. Did you know Einstein's original calculations around E=mc2 needed to be refined by others? Nice article.

Note: I was clued in to this article by reviewing my Twitter page, where Clark Quinn had tweeted about it.
Will Thalheimer . Blog . Dec 07, 2015 02:03pm
Here is the comment I sent to the NY Times in response to their focus on a supposed research study that purported to show that gifted kids are being underserved. I'm a little over the top in my comments, but I still think this is worth printing because it demonstrates the need for good research savvy, and it shows that even the most respected news organizations can make really poor research-assessment mistakes.

Egads!! Why is the New York Times giving so much "above-the-fold" visibility to a poorly conceived research study funded by a conservative think tank with obvious biases? Why isn't at least one of your contributors a research-savvy person who could comment on the soundness of the research? Instead, your contributors assume the research is sound. Did you notice that the references in the original research report were not from top-tier refereed scientific journals?

In the original article from the Thomas Fordham Institute (a conservative-funded enterprise), the authors try to wash away criticisms about regression to the mean and test variability, but their response to these obvious—and most damaging, and most valid—criticisms is not good enough. If you took the top 10% of players in the baseball draft, the football draft, any company's onboarding class, or any randomly selected group of maple trees, a large percentage of the top performers would not be top performers a year or two later. Damn, ask any baseball scout whether picking out the best prospects is a sure thing. It's not! And, in the cases I mentioned above, the measures are more objective than an educational test, which has much higher variability—which would make even more top performers leak out of the top ranks.

NY Times—you should be embarrassed to have published these responses to this non-study. Seriously, don't you have any research-savvy people left on your staff? We have scientific journals because the research is vetted by experts.
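The regression-to-the-mean point is easy to demonstrate with a quick simulation. The following is an illustrative sketch (mine, not from the article; the population size and noise level are assumptions): model each performer's observed test score as true ability plus random measurement noise, then check how many of the first test's top 10% remain in the top 10% on a second, equally noisy measurement.

```python
import random

random.seed(42)

N = 10_000   # number of performers (assumed for illustration)
NOISE = 1.0  # measurement-noise std dev, relative to an ability std dev of 1 (assumed)

# True ability never changes; only the noisy observations differ between tests.
ability = [random.gauss(0, 1) for _ in range(N)]
test1 = [a + random.gauss(0, NOISE) for a in ability]
test2 = [a + random.gauss(0, NOISE) for a in ability]

# Indices of the top 10% on each test.
cutoff = int(N * 0.10)
top1 = set(sorted(range(N), key=lambda i: test1[i], reverse=True)[:cutoff])
top2 = set(sorted(range(N), key=lambda i: test2[i], reverse=True)[:cutoff])

retained = len(top1 & top2) / cutoff
print(f"Fraction of the first test's top 10% still in the top 10%: {retained:.0%}")
```

With noise as large as the ability spread, well under half of the "top performers" typically stay in the top decile—purely from measurement variability, exactly the effect the comment describes. A more objective measure (smaller NOISE) raises the retained fraction.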
Will Thalheimer . Blog . Dec 07, 2015 02:03pm
I just read a vendor blog post that lists the pros and cons of gamification. PLEASE, let us be smarter than this! Gamification is NOT a THING! There can be no pros and cons to gamification. Gamification is a label for dozens of specific factors, each of which can be used or not used, alone or in concert with other gamification learning factors. Here is a small list of gamification factors (just off the top of my head):

- competing against a standard
- competing against others
- being given some sort of non-tangible "award" for perseverance
- being given some sort of non-tangible "award" for some level of success
- being given some sort of non-tangible "award" randomly as you "play"
- working on a team
- escaping a threat
- working toward a specific goal
- performing with a time constraint
- et cetera (ad infinitum?)

[Hey, if anybody has published a list of gamification factors, let me know and I'll post it.]

Seriously, when we oversimplify, we not only show our ignorance of the magical complexity of human learning and cognition, we also hurt our own thinking and problem solving, and those of every person with whom we are communicating. Sure, some vendor wants to sell gamification. I get that. But what is really being said is this: some vendor wants to sell gamification to the most vulnerable within our profession (newbies, etc.), and even to the less vulnerable, and even to the best-and-brightest, who may have a temporary brain freeze from such miscommunication.

No, no, no! Sorry! That's too cynical, right? Probably the vendor honestly thinks that their list of pros and cons is helpful. Probably the vendor doesn't really understand that all of us must look more deeply than our industry's surface ripples.

How to end this blog post? Hmmm. This is difficult. I'm not sure. Okay, I got it. Final advice: Evaluate the labels used in the learning industry. Seek the constituent factors. Get data on their causal effects. 
Hire learning experts from time to time to reality-check your learning designs. Give yourself a gold star for reading this blog post to the end. You have reached WAWL Level 2, performing better than 92.4% of your colleagues! To get to WAWL Level 3, do a Google search on gamification, find a list of gamification factors, and send me the link. To get to WAWL Level 4, create your own list, reflect on what you discover, post it somewhere, and send me the link. May the forces of the Neon Elephant keep you safe on your journey. WHO-LA!
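To make the "bundle of factors" argument concrete: a minimal sketch (my own illustration—the factor names paraphrase the list above and are not a published taxonomy) that treats a learning design as a set of independently toggleable gamification factors, so each factor can be discussed, enabled, and evaluated on its own rather than as a monolithic gamification yes/no.

```python
from dataclasses import dataclass, field

# Hypothetical factor names, paraphrasing the list in the post above.
FACTORS = {
    "compete_vs_standard",
    "compete_vs_others",
    "award_for_perseverance",
    "award_for_success",
    "random_award",
    "team_play",
    "threat_escape",
    "goal_pursuit",
    "time_constraint",
}

@dataclass
class LearningDesign:
    """A learning design as a combination of individual gamification factors."""
    name: str
    enabled: set = field(default_factory=set)

    def toggle(self, factor: str) -> None:
        # Switch one factor on or off independently of all the others.
        if factor not in FACTORS:
            raise ValueError(f"unknown factor: {factor}")
        self.enabled.symmetric_difference_update({factor})

    def describe(self) -> str:
        return f"{self.name}: {sorted(self.enabled)}"

design = LearningDesign("compliance-course-v2")
design.toggle("goal_pursuit")
design.toggle("time_constraint")
print(design.describe())
```

The point of the structure is the one the post makes: "pros and cons of gamification" is ill-posed, but "what is the causal effect of `time_constraint` on retention?" is a question you can actually gather data on.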
Will Thalheimer . Blog . Dec 07, 2015 02:03pm
I created a video to help organizations fully understand the meaning of their smile sheets.   You can also view this directly on YouTube: https://youtu.be/QucqCxM2qW4
Will Thalheimer . Blog . Dec 07, 2015 02:03pm
I'm delighted to be attending the eLearning Guild's DevLearn conference in Las Vegas, coming up in late September and early October. The eLearning Guild always puts on a great conference, and I'm excited to learn the latest and greatest on elearning and mobile learning. This year, I'm going to be keeping my eyes out for examples of micro learning and subscription learning, as I see more and more interest in smaller learning nuggets. Also, I'll be speaking on "Measuring eLearning to Create Cycles of Improvement." In my session, I'll share research-based findings and their implications for elearning measurement designs. Come join me, 10:45 AM - 11:45 AM, Thursday, October 1.
Will Thalheimer . Blog . Dec 07, 2015 02:03pm
Over the last year or so I've been doing a lot of thinking about learning at conferences. I've also taken to the conferences-on-conferences circuit to speak on the issue. All too often, conference speakers don't follow science-of-learning prescriptions. Conference sessions may make the audience happy, but may not provide the kinds of supports that help people remember and apply what they've learned. This is bad for conference attendees and their organizations, because they never realize the benefits of what was learned. But it's bad for conference organizers as well, because their customers may not be getting all the value they might be getting.

A few days ago, Michelle Russell, editor at Convene Magazine, wrote a great article on implementing the science of learning in conferences. She interviewed me and Peter C. Brown, co-author of the wonderful book Make It Stick and winner of the Neon Elephant Award last year. Here's Michelle's article: Click to access the article on the Science of Learning for Conferences. It's a great read. Michelle does amazing work. I recommend you read the article now and then leave your reflections here so we can get a conversation started.

Making changes to conference learning is not easy. Traditions and expectations push against innovations. Still, in attending several recent conferences, I've noticed some very different formats being used to great acclaim.

Jeff Hurt
My go-to expert on conference learning is Jeff Hurt of Velvet Chainsaw. Indeed, it was Jeff who got me thinking seriously about conference learning. We've even co-presented on the topic a number of times. Here are two blog posts by Jeff that describe typical dangerous assumptions about conference learning: Part 1 and Part 2.

Improving Keynotes
Here's an article I wrote on how to improve the learning benefits of keynotes: Click to access my article on Improving the Learning Benefits of Keynotes.

Learning Coaches
In Michelle's article above, Peter C. Brown recommended that conferences have learning coaches to help support speakers and attendees in learning. I'd love to take on that task. And I'm curious: have you seen anyone play that role? What works? What doesn't?

What Are Your Reflections on Conference Learning?
I'm also wondering what your experiences are around learning at conferences... Leave comments below...
Will Thalheimer . Blog . Dec 07, 2015 02:02pm