Four broad lessons that apply to addressing difficult employees.
Janice Burns . Blog . Sep 05, 2015 03:49am
We wanted to create something new with SCORM Cloud. Something that could take advantage of the changes happening in learning online. Something that could change the way we think about tapping the internet. Something that anticipated the needs of educators and trainers. And we think we did it. And we aren’t the only ones. In fact, SCORM Cloud was just short-listed for the e.learning age awards in the most innovative new product category. Sweet! SCORM Cloud is just at the beginning of its impact. It’s already making life easier for people with big open-source LMSs, anyone needing to do SCORM testing and those offering training via WordPress. And we’re looking for new ways to stretch all the time. Upcoming innovations? How about using it with Google Apps for domains? That one’s in the works to be available soon and could be a dream for small businesses. And we’re hoping to let you know soon about ways people outside of Rustici are building on top of SCORM Cloud.
Mike Rustici . Blog . Sep 05, 2015 03:49am
I don’t know how many times I’ve said to someone on the phone, "SCORM is difficult, especially for the LMS provider." There are many moving parts, countless interpretations, and vagaries in the specification itself. For the most part, we handle these things exceptionally well. Sometimes we make mistakes, and sometimes those mistakes can compound themselves.

The Source of Today’s Problem

In SCORM 1.2, mastery_score and lesson_status can interact with each other strangely. Frankly, the specification can be interpreted in two ways. From Section 3.4.4, "The SCORM Run-Time Environment Data Model", in the cmi.core.lesson_status section (henceforth called "The Narrow View"):

After setting the cmi.core.lesson_status to "completed", the LMS should now check to see if a Mastery Score has been specified in cmi.student_data.mastery_score, if supported, or the manifest that the SCO is a member of. If a Mastery Score is provided and the SCO did set the cmi.core.score.raw, the LMS shall compare the cmi.core.score.raw to the Mastery Score and set the cmi.core.lesson_status to either "passed" or "failed". If no Mastery Score is provided, the LMS will leave the cmi.core.lesson_status as "completed".

From Section 3.4.4, "The SCORM Run-Time Environment Data Model", in the cmi.core.lesson_status section, incorporating text before and after "The Narrow View" (henceforth called "The Holistic View"):

Additional Behavior Requirements: If a SCO sets the cmi.core.lesson_status then there is no problem. However, SCORM does not force the SCO to set the cmi.core.lesson_status. There are some additional requirements that must be adhered to in order to successfully handle these cases:

- Upon initial launch, the LMS should set the cmi.core.lesson_status to "not attempted".
- Upon receiving the LMSFinish() call or the user navigating away, the LMS should set the cmi.core.lesson_status for the SCO to "completed".
From above:

After setting the cmi.core.lesson_status to "completed", the LMS should now check to see if a Mastery Score has been specified in cmi.student_data.mastery_score, if supported, or the manifest that the SCO is a member of. If a Mastery Score is provided and the SCO did set the cmi.core.score.raw, the LMS shall compare the cmi.core.score.raw to the Mastery Score and set the cmi.core.lesson_status to either "passed" or "failed". If no Mastery Score is provided, the LMS will leave the cmi.core.lesson_status as "completed".

Herein lies the big difference. The bullets are intended only for the cases in which the LMS has been forced to manage the status on its own. For a piece of content that sets its own status (as we’ll discuss below), we believe the LMS is not supposed to intervene with regard to the Mastery Score.

What we did wrong, a while ago

In SCORM Engine 2007.1, we went with this logic, which maps to "The Narrow View": if cmi.core.lesson_status has been set and cmi.core.score.raw has been set, compare the Mastery Score to cmi.core.score.raw and set the status to "passed" or "failed". Ultimately, as this logic rolls up through the course, it tolerates content we believe is wrong and reads to the client LMS as "completion_status=complete and success_status=passed" or "completion_status=complete and success_status=failed". Put another way, it cleans up the mistaken interpretations made by the content author. (It’s an understandable mistake.)

This seems OK at first blush, but then you start running into content that expects the other behavior. If you’re a content author who reads the spec holistically, and you’ve intentionally set a value for lesson_status, and the LMS overrides it, that’s pretty confusing. If the spec were totally clear on the subject, we would stand behind it. Given that the spec is ambiguous here, we can appreciate the author’s point of view. So, we did what we do. We made accommodations.
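The difference between the two readings can be sketched as a status-rollup routine run at LMSFinish() time. This is an illustrative sketch, not SCORM Engine source; the function and field names are ours:

```javascript
// Sketch of the two possible SCORM 1.2 lesson_status rollup behaviors.
// All names here are illustrative; this is not SCORM Engine code.
// `data` holds the SCO-reported cmi.core.* values (null if never set).

function rollupLessonStatus(data, masteryScore, overrideWithMasteryScore) {
  var status = data.lessonStatus; // cmi.core.lesson_status as set by the SCO
  var rawScore = data.scoreRaw;   // cmi.core.score.raw as set by the SCO

  // Holistic view: the LMS only manages status when the SCO never set one.
  if (status === null) {
    status = "completed";
    if (masteryScore !== null && rawScore !== null) {
      status = (rawScore >= masteryScore) ? "passed" : "failed";
    }
    return status;
  }

  // Narrow view (enabled via the package property): the LMS compares
  // against the mastery score whenever a score was reported, even if
  // the SCO already set its own status.
  if (overrideWithMasteryScore && masteryScore !== null && rawScore !== null) {
    status = (rawScore >= masteryScore) ? "passed" : "failed";
  }
  return status;
}
```

For example, a SCO that sets "completed" and reports a raw score of 60 against a mastery score of 80 stays "completed" under the holistic default, but becomes "failed" when the override property is enabled.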
How we accommodate different interpretations of the specification

We have long believed that the best way to build a highly compatible SCORM player is to accommodate different interpretations from content. This is a perfect example of why we do this, and it allows us to properly support content in a way that other LMSs and players just don’t. From our release notes for 2008.1:

Mastery Score Overrides Lesson Status - In SCORM 1.2, there is a debate about when and if the LMS should override the lesson status reported by the SCO with a status determined by the reported score’s relation to the mastery score (i.e., if the reported score is 60 and the mastery score is 80, should the LMS set the status to failed even though the SCO said the status should be passed?). This setting allows you to choose whether or not the LMS should override the status based on the score for this course.

Alright, this is great, right? Now we can have our cake and eat it too. (The fact that cake is gross will have to be another post.)

Every time we add a new package property like this one, we have to make a decision on behalf of our clients: we have to decide what the default is. In some cases, this is easy. When we’re tolerating departures from the standard, we simply go with the standard as the default. This is a tough one, though, because the spec is ambiguous. In this situation, we go with what we believe is the correct interpretation of the standard. Here, we opted for "false": the mastery score does not override the status. We think that a content developer who’s smart enough to set his or her own status is also smart enough to retrieve the mastery score and compare against it if they want to. We’re erring on the holistic side of things here, and I still feel good about this decision. I do not, however, feel good about our mistake.

The Mistake

We chose the default. We deployed the new version of the SCORM Engine.
And we added the necessary columns as part of the upgrade script. In doing so, we used the default value.

"Big mistake. Big. Huge." -Vivian, Pretty Woman

(Note, this is not a widespread problem. It’s isolated to content with an atypical interpretation, but it is very problematic for those courses. I just like to quote movies.)

Some of our clients have content that expected the LMS to make the comparison against the Mastery Score even though they’d already set the status themselves. This content had functioned without issue for some time, and in upgrading to 2008.1, they introduced a problem with that older content. With the new default, this is what happens: a course could set cmi.core.lesson_status to "completed" and then report a cmi.core.score.raw that exceeds the Mastery Score it provided. The content could assume that the LMS logic defined in Section 3.4.4 (the narrow view) would then change the lesson_status to "passed". Because we’ve opted to go with the holistic approach by default, the status would in fact not be changed. This scenario isn’t a big deal, though; the client LMS would still interpret the course as sufficiently completed, and all would be well.

The mistake manifests itself when cmi.core.score.raw is less than the Mastery Score. In that situation, the status values remain "completion_status=complete and success_status=unknown". To the client LMS, this appears to be a course that is complete and has no testing, when in fact it’s really a failed test.

The conclusion? We picked the right defaults for people going forward, but we probably should have set the defaults in the upgrade script to stick to the old behavior. (We have, in fact, gone back to the 2008.1 upgrade script and made this change for those of you who have yet to upgrade.)

Now What?

Well, we just, last night, discovered this side effect, and it obviously merits immediate action for some clients.
For those of you who ran against 2007.1 and have some concern that you may have courses that function like this, you can opt to revert to the old logic. If you’d like help doing just that, you can simply ask us for the queries to revert to that default. We’ll help you through that and we’ll help you examine any potential "false completions" that have happened since you deployed 2008.1. If you’re building a new SCORM Engine integration, you can opt to go with our defaults. It is our experience that more content (including some from a big authoring tool vendor) benefits from our new default behavior. But that doesn’t mean it catches every scenario. This is something that you and we will continue to be on the lookout for. In fact, we’re going to see if there’s any sort of a heuristic that we could deploy successfully to handle this ourselves. (We’re not optimistic, but we’d like to catch this one without human intervention.)
Mike Rustici . Blog . Sep 05, 2015 03:48am
We made a small change in our reporting tool in the past month. You might not have even noticed. But it’s probably the biggest, baddest, best change we could have made, because it fixes the source of many questions and much confusion about the tool just by changing to real-time reporting. We had originally meant for data in the reporting tool to update once a day. There’s a whole technical process where the data has to be converted from the form in which it gets saved in the cloud to the form in which it is reported in the tool. We really thought that once a day (and later, every 30 minutes) would be sufficient for admin types. Only, it wasn’t. In part because admins weren’t the only ones needing to see pieces of the data, and in part because we’ve all just gotten used to a world of real-time data. Our big concern - and the only reason we didn’t just jump on this one for a little while - was that the data transmogrification would throw a few kinks in the works and cause everything else to take longer. We’d hate to sacrifice performance in the process of adding a feature. We dug a little deeper, spent a little time, and realized this wasn’t going to be a problem. So, enter the world of real-time reporting! Where you’ll actually notice this: when a learner completes a course, they can be informed of their completion status immediately. In addition, you can go straight to a launch history report and see what just occurred in a course where there was a problem. It will all be there, ready and waiting, in real time.
Mike Rustici . Blog . Sep 05, 2015 03:47am
A year ago, we talked about 5 things every piece of SCORM content should do. Today, I wanted to mention 4 things every SCORM test should do. Keep in mind, SCORM tests are a subset of SCORM content, so they should be doing those same 5 things I mentioned a year ago. Take that as a given. This is just a further set of details.

1. Record your interactions with full detail

SCORM provides a way for content to record the learner’s answers (and the questions) to the LMS. Put simply, do it. We see content all the time that elects not to report these interactions, and it’s a waste. Even if the LMS doesn’t report well on this information today, it could do so eventually. And from a learning/remediation perspective, it’s crucial that the administrators of the LMS be able to see how their learners are progressing. Just do it. Nike would be proud.

Now for the details. Recording interactions can be done well or done poorly. If you want to do it really well, and your LMS supports it, opt to use SCORM 2004 rather than SCORM 1.2. SCORM 2004 allows content to be far more expressive in reporting interactions.

Next, understand the data model elements. I’ll spare you all the details here, but here are some highlights. For the purpose of examples, pretend we’re working with this question:

What is my name?
A. Tim Martin
B. Reggie Benes
C. Dan Stook
D. Keith Bolliger

- Record the result (correct/incorrect) in cmi.interactions.n.result.
- Record the learner response and correct response using a human-readable identifier (or a collection of them). Better to record "Tim_Martin" than "A" if the learner answered the question correctly. This gives the LMS an opportunity to share that data with the administrator in a useful fashion. And in SCORM 2004, "Tim_Martin" is now a valid response pattern. (In SCORM 1.2, "A" was the best you could do.)
- Use cmi.interactions.n.description.
Frankly, this is one of the best additions in SCORM 2004, allowing you to record that the question was, in fact, "What is my name?" From a reporting perspective, this is a vast improvement. If you’re going to go this far, you might as well complete the data model and record the following:

- cmi.interactions.n.type
- cmi.interactions.n.weighting
- cmi.interactions.n.latency
- cmi.interactions.n.timestamp

2. Understand the difference between state and journaling

First things first: interactions are recorded in an array. Take note of cmi.interactions.n.whatever. That array is sequential, and each time a SCO wants to record something to it, it has to ask for the next available space (via cmi.interactions._count). Separate from the n I’ve just mentioned, though, is the identifier of the interaction: cmi.interactions.n.id.

If a piece of content wants to record a 10-question test and have a slot for each of the 10 questions, it can do that, even if it allows the user to update their answers. It would do so by cycling through the existing interactions and examining their cmi.interactions.n.id values to see which one matches the interaction that needs to be updated. This technique of updating a given interaction’s values by cycling through the array and resetting those values is called "state" or "stateful". The recorded interaction indicates the current state of those values. It also eliminates any prior values that may have been recorded. State is a valid approach to recording interactions.

On the other hand, the array allows you to simply add another value to the interactions array rather than seeking out the old array location and overwriting it. In this case, the content would simply request the cmi.interactions._count value and record the new interaction data in that slot of the array. In using this journaling technique, all of the historical values for that interaction are maintained.
If the content wishes to retrieve those values, say on relaunch, though, it has to be more intelligent about discerning which answer was most recently given. Note, both journaling and state are valid options. It’s crucial, though, that the content manage its concept of cmi.interactions.n.id well. A piece of content that uses a new id each time it reports an interaction is not properly journaling, because the association between multiple answers to the same question is lost.

3. Set completion status and success status

In SCORM 1.2, completion status and success status were rolled up into a single entity, cmi.core.lesson_status. It had six potential values, including completed, incomplete, passed, and failed. In this world, it was impossible for the content to tell the LMS whether a failed status meant that the user should be allowed to take the content again or not. Was it failed because they hadn’t finished? Who knew?

SCORM 2004, though, separates the concepts of passing and completing using two distinct data model elements:

- cmi.completion_status (completed, incomplete, not attempted, or unknown)
- cmi.success_status (passed, failed, or unknown)

This allows the content to be more expressive about whether a failure was final. Each content vendor is welcome to their own interpretation here, but making use of both completion_status and success_status is important in SCORM 2004.

4. Post a score

Lastly, be sure to post a score. It’s such a simple thing to do, and it’s hugely useful to the LMS. Take note: in SCORM 2004, posting a score should look like this for a 10-question test on which you got 8 right.

- Set cmi.score.raw to 8
- Set cmi.score.min to 0
- Set cmi.score.max to 10
- Set cmi.score.scaled to 0.8
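Putting the pieces together, here is a rough sketch of how a SCORM 2004 SCO might journal one interaction (for the "What is my name?" question above) and post its score and statuses. This is an illustrative sketch, not code from any particular authoring tool; `api` is assumed to be the handle to the LMS-provided SCORM 2004 runtime object, and the helper function names and passing threshold are ours:

```javascript
// Sketch of a SCORM 2004 SCO recording one interaction and a score.
// Assumes `api` is a handle to the LMS-provided SCORM 2004 runtime
// (normally discovered via the window.API_1484_11 ancestor walk).

function recordQuestion(api, correct) {
  // Journaling: always append at the next free slot, but keep a stable
  // id so the LMS can associate repeated answers to the same question.
  var n = api.GetValue("cmi.interactions._count");
  var p = "cmi.interactions." + n + ".";
  api.SetValue(p + "id", "what_is_my_name");
  api.SetValue(p + "type", "choice");
  api.SetValue(p + "description", "What is my name?");
  api.SetValue(p + "learner_response", correct ? "Tim_Martin" : "Reggie_Benes");
  api.SetValue(p + "correct_responses.0.pattern", "Tim_Martin");
  api.SetValue(p + "result", correct ? "correct" : "incorrect");
  api.SetValue(p + "timestamp", new Date().toISOString());
}

function postScore(api, right, total, passingFraction) {
  // Post raw/min/max/scaled together, then both statuses.
  api.SetValue("cmi.score.raw", String(right));
  api.SetValue("cmi.score.min", "0");
  api.SetValue("cmi.score.max", String(total));
  api.SetValue("cmi.score.scaled", String(right / total));
  api.SetValue("cmi.completion_status", "completed");
  api.SetValue("cmi.success_status",
               right / total >= passingFraction ? "passed" : "failed");
  api.Commit("");
}
```

For the 8-of-10 example above, `postScore(api, 8, 10, 0.8)` would report raw 8, min 0, max 10, scaled 0.8, completed, and passed.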
Mike Rustici . Blog . Sep 05, 2015 03:47am
Blackboard is big time. Blackboard Learn™ leads the higher education learning management world. Blackboard Learn serves more than 20 million learners. And starting in 2011 (or thereabouts), Blackboard is going to be rocking the SCORM Engine in Blackboard Learn. We are nothing short of thrilled to announce that Blackboard has signed a long term agreement with Rustici Software to deliver all SCORM and AICC based material in Blackboard Learn, their flagship product. Ultimately, we’ll let Blackboard tell their story of why they opted to go with the SCORM Engine, but this is what we know. Blackboard had a prior SCORM delivery setup based on an open source SCORM implementation, and they found it to be inadequate in supporting their customers. Blackboard considered building their own SCORM implementation, but realized they could do it better and more cost effectively by working with us. Blackboard considered other commercial SCORM technology… briefly. Blackboard’s adoption of our rock solid SCORM technology will make things better for Blackboard learners and those people who provide content to Blackboard. This is a huge step for us as well toward one of our long term goals: Rustici Software would like to provide the technology that makes every SCORM transaction go. Consider this an invitation to all of the big LMS providers. Each LMS provider that adopts the SCORM Engine reduces the pain associated with eLearning for the industry as a whole. Blackboard’s adoption of the SCORM Engine is a big step toward our goal. For more on Blackboard’s commitment to SCORM and open standards in general, check out today’s blog posts by Ray Henderson and John Fontaine.
Mike Rustici . Blog . Sep 05, 2015 03:46am
The Big Announcement

Well, here it is, the big announcement we hinted at with the obscure name "Project Tin Can". Rustici Software has been hired by ADL to help produce the successor to SCORM. For the next year, we will be conducting outreach, gathering requirements, proposing solutions, and developing prototypes of a new "Experience API". Is this "SCORM 2.0"? Well, kinda sorta, but not really. This is much bigger. Just what this successor is and what it will be called isn’t formally decided yet. One thing is for certain, though: ADL is thinking big. The "Experience API" is just one part of a larger framework that encapsulates all aspects of learning. It is an exciting time, and we’re happy to be playing a big part in it. You can read all about it at http://www.scorm.com/tincan.

The first phase of this project is all about outreach. That means you’re going to be hearing from us, and we need your help. We’re not defining the next generation… you are! For now, check out the project site. You’ll find a collaboration area where you can vote on existing ideas, submit new ideas, and participate in discussions. That’s just the start. Expect to hear a lot more from us over the next few months. We will be recruiting people to provide use cases and one-on-one interviews, as well as highlighting particular areas for discussion. To stay up to date with the latest progress, you can:

- Follow us on Twitter @projecttincan
- Follow the RSS feed or get email notifications on this blog
- Sign up for the feedback site and get notifications when changes are made
Mike Rustici . Blog . Sep 05, 2015 03:44am
Yup, we’ll admit it, there have been more than a few times we’ve asked for your feedback to shape the future of SCORM… and yet, not much progress has been made. There’s this post, and this one, and this one, and this one, and this one, and this one, and even this one. If it seems like a lot of "here comes the next SCORM", you’re right: there’s been a lot of talk and precious little action. With the exception of the LETSI RTWS specification, not much has happened in the SCORM world for the past few years. But it’s not all for naught. Those earlier calls for feedback (especially the ones from LETSI) resulted in an enormous collection of data about what the industry needs to move forward. All of that feedback has been cataloged and is serving as a primary source of input into Project Tin Can. Is it for real this time? We think so. Either way, we’re charging full speed ahead, and we hope you’ll come along for the ride.
Mike Rustici . Blog . Sep 05, 2015 03:44am
Project Tin Can isn’t about us… it’s about you. We know SCORM, but that’s about all (OK, well, I make a mean chocolate chip cookie too). It’s you guys who know what learning is all about. You know how organizations will need to train in the future. You have the ideas about how mobile learning, games, simulations, informal learning, etc. are changing your worlds. We need you to tell us. We want to know how you see learning evolving. What technologies are most impactful? What are some ways people are thinking outside the box? What does your ideal world look like? What new and innovative approaches have you seen? Tell us what’s important in your corner of the world. Your perspective is as unique as Tim’s taste in music… and we want to hear it (unlike Tim’s music). If you’d be willing to give us 15 minutes of your time to have an impact on the future of learning, please let us know. Our goal is to do at least 100 interviews in the next couple of months. 15 minutes on the phone not your thing? Head over to the Project Tin Can discussion board and vote, comment, and submit… make your voice heard. More of the paper-writing type? Send those over too. We’ll incorporate any feedback you have to offer. Just drop us a line.
Mike Rustici . Blog . Sep 05, 2015 03:44am
We’ve been talking to a lot of people about Project Tin Can, and we’ve been hearing a lot of "Didn’t you just do that with the LETSI RTWS project?" Well, kinda, sorta, but not really. Here’s the difference: LETSI RTWS is about right now; Project Tin Can is about the future. RTWS is about improving the technical implementation of SCORM; Tin Can is about increasing the scope of what can be done.

The LETSI RTWS project set out to solve a number of shortcomings of SCORM with a common-sense solution that everybody agreed needed to be implemented (namely, a web services interface for SCORM run-time communication). RTWS is ready to be implemented now, and a number of vendors are already jumping on the bandwagon (SCORM Engine and SCORM Cloud updates with RTWS are due in the next few weeks). RTWS solves many problems with the technical implementation of the current specification. It drastically expands the scope of what can be done with SCORM, but it doesn’t expand the scope of what SCORM does. In other words, RTWS removes many technical barriers to implementing things like remotely hosted content, offline/occasionally connected devices, serious games, and simulations. However, fundamentally, RTWS is still doing the same thing as SCORM (tracking learner progress through e-learning content); it just does so in a different way.

With Project Tin Can, we are tasked to dream big. We’re thinking beyond SCORM and into the future. Project Tin Can is all about imagining what can be done and charting a course to get there. That is why it is so important to get your feedback. Our imaginations are only so big… but collectively we can paint a picture of greatness. So, tell us: what should the world look like in 5 years? What can we do besides record the fact that somebody flipped through a page-turner? How will people be learning, and what should we do with this knowledge? How does learning data need to interact with other data? Which systems should be talking?
How does learning relate to the rest of the world? This is all part of ADL’s Future Learning Experience project. It is just getting started (with Project Tin Can). For some more context, check out this post from ADL’s Community Manager, Aaron Silvers.
Mike Rustici . Blog . Sep 05, 2015 03:43am