The field of analytics in higher education is relatively new and descriptions are often imprecise. Different types of analytics, with little in common, are regularly lumped together. At the 1st International Conference on Learning Analytics and Knowledge in 2011, analytics was defined broadly as the "measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs." Analytics comes in many shapes and sizes, depending on:

- End users
- The type of data required
- The sources of the data captured
- The purpose or intended goal

We propose three categories of analytics in online higher education, each concerned with student performance:

- Institutional Analytics
- Engagement Analytics
- Learning Analytics

Identifying and defining the different categories is an important first step in helping educational professionals more quickly focus on the analytics that are of most value to their institutions.

Institutional Analytics

Institutional analytics is primarily concerned with tracking learners through their educational lifecycle, from enrollment to graduation. The data collected focuses on information such as student profiles (age, address, ethnicity), course selections, pace of program completion, use of support services, and graduation rates. The data needed for institutional analytics is often readily available within the institution’s registration system. The task, then, is to organize the data and identify the metrics that are most important to the institution. The value of the information can be multiplied by linking this data to other systems, such as enrollment and learning management systems, student support applications, and customer relationship management software. Analytics of this type is commonly the purview of the institution’s internal research and business analysts.

Institutions use this information in a variety of ways to:

- Align recruiting tactics with institutional aid
- Identify better recruiting practices to improve retention and completion rates
- Predict high-risk students earlier in order to provide more targeted support

External demand for this kind of information is growing as state and regulatory bodies seek to better monitor (and reward) certain types of institutional performance. Similarly, more institutions are distributing information to help students and parents make more informed decisions about programs and schools. Examples of software used to support institutional analytics include iDashboards, SAP, and SAS.

Engagement Analytics

Unlike institutional analytics, engagement analytics track student activity within the course environment, which is typically the learning management system (LMS). The information generated can be of value to the institution, students, and instructors, but most of it is designed with the instructor in mind, in keeping with the overarching instructional model of higher education. The type of information collected in engagement analytics often includes:

- Number of page views (per page)
- Contributions by students to discussion threads
- Which students (and what percentage of the total cohort) have completed the assignments
- Number of logins
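To make the idea concrete, here is a minimal sketch of how raw LMS activity events might be reduced to the per-student counts listed above. The event-log format, field names, and sample data are hypothetical, not those of any particular LMS.

```python
# Illustrative sketch only: reducing a hypothetical LMS event stream to
# simple per-student engagement counts (page views, discussion posts, logins).
from collections import defaultdict

events = [
    {"student": "s1", "type": "page_view", "page": "module-1"},
    {"student": "s1", "type": "login"},
    {"student": "s2", "type": "discussion_post", "thread": "week-1"},
    {"student": "s1", "type": "page_view", "page": "module-2"},
]

# Tally each event type per student.
counts = defaultdict(lambda: defaultdict(int))
for event in events:
    counts[event["student"]][event["type"]] += 1

for student, tallies in counts.items():
    print(student, dict(tallies))
# s1 {'page_view': 2, 'login': 1}
# s2 {'discussion_post': 1}
```

Counts like these are what an engagement dashboard visualizes; as discussed below, they describe activity rather than learning.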
This information, as with other types of analytics, is presented in a visual format, often as a dashboard, with its roots in business intelligence software. A well-constructed visual display of data makes interpreting course activity faster and simpler. When used effectively, this information can help instructors and institutions identify students who may need additional support and encouragement, and help determine the most effective type of student support intervention.

However, engagement analytics do not necessarily measure learning, per se. What’s measured is student activity, which may or may not signal actual learning. For example, engagement analytics is often used to track student page views. The student’s presence on a particular page within the course site tells us that the student has been exposed to that part of the curriculum. But it doesn’t tell us whether the student understands the curriculum. In fact, it may be that the student inadvertently left the browser window open while searching the Internet.

Writer and researcher Stephen Downes, who specializes in online learning, describes the challenge of using engagement analytics this way: "There are different tools for measuring learning engagement, and most of them are quantificational. The obvious ones [measure] page access, time-on-task, successful submission of question results - things like that. Those are suitable for a basic level of assessment. You can tell whether students are actually doing something. That’s important in certain circumstances. But to think that constitutes analytics in any meaningful sense would be a gross oversimplification."[1]

Examples of software used for engagement analytics include Schoology, Blackboard Analytics for Learn™, Starfish Retention Solutions, and Ellucian Course Signals.

Learning Analytics

Learning analytics measure the student’s actual learning state: what students know, what they don’t know, and why. We propose that the category of learning analytics be reserved for analytics that actually measure changes in a student’s knowledge and skill level with respect to specific curriculum. The insights generated from true learning analytics support optimization of learning through information, recommendation, and personalization. Learning analytics are actionable. Examples of the type of information that can be captured by learning analytics include:

- What aspects of the course did the student master?
- Which students are struggling, and with which concepts, topics, and problems?
- What misconceptions about the curriculum are leading to poor performance?
- What topics require more attention or better presentation?

The data for learning analytics is captured through frequent formative and summative assessments. Based on the data generated from a student’s interaction with these assessments, it is now possible, as a result of extensive research at Carnegie Mellon University and elsewhere, to derive strikingly accurate measurements of student knowledge and skills. Learning theory offers explanations of the mechanisms of learning (e.g., the power law of learning, cognitive load, rate of learning, and learning decay), and these cognitive factors can be incorporated into learning analytics models to measure, predict, and respond to student performance in the online course.
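As a rough illustration of how a sequence of assessment responses can be turned into an estimate of a student’s knowledge state, the sketch below applies Bayesian Knowledge Tracing, a standard model from the learning-science research literature. The parameter values and function are hypothetical and purely illustrative; this is not a description of Acrobatiq’s or OLI’s actual models.

```python
# Illustrative sketch only: Bayesian Knowledge Tracing (BKT) estimates the
# probability that a student has mastered a skill from graded responses.
# All parameter values below are hypothetical.

def update_mastery(p_mastery, correct,
                   p_guess=0.2,    # chance a non-master answers correctly
                   p_slip=0.1,     # chance a master answers incorrectly
                   p_learn=0.15):  # chance of learning the skill on this attempt
    """Return the updated probability of mastery after one response."""
    if correct:
        posterior = (p_mastery * (1 - p_slip)) / (
            p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
    else:
        posterior = (p_mastery * p_slip) / (
            p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
    # Allow for the possibility that the student learned the skill just now.
    return posterior + (1 - posterior) * p_learn


# Example: five formative questions on a single learning objective.
mastery = 0.3  # assumed prior before any practice
for correct in [False, False, True, True, True]:
    mastery = update_mastery(mastery, correct)
    print(f"{'correct' if correct else 'incorrect'} -> mastery ~ {mastery:.2f}")
```

Estimates of this kind, aggregated across learning objectives, are the raw material for the dashboards and adaptive behaviour described next.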
The insights produced by learning analytics can be used to create dashboard-style reports of student performance, to modify a student’s experience in real time, or both. Learning dashboards give learners, faculty, and institutions a visual snapshot of each student’s performance as it relates to specific learning objectives. The information can also be used in real time to continuously adapt the instructional activities presented to the learner (e.g., the level of difficulty) to match individual needs. Automated recommendations help create a personal learning path for each student. Practice is personalized so that students receive the right amount of practice, targeted at the right level, for the right topics. Instructors and mentors receive dashboards and alerts to guide more timely and effective interactions and interventions. Instructional design teams use the data to measure the efficacy of course materials and continuously improve the learning experience, saving time and resources.

Each of the three types of analytics offers value, and there is inevitably some overlap between the approaches. But learning analytics is the only approach from which educators can confidently determine the actual state of a student’s learning. It provides a true foundation for new opportunities to improve and optimize learning.

[1] Collaboration, Analytics, and the LMS: A Conversation with Stephen Downes. Campus Technology. Retrieved February 6, 2014. http://campustechnology.com/newsletters/ctfocus/2010/10/collaboration_analytics_and-the-lms_a-conversation-with-stephen-downes.aspx

By John Rinderle, Chief Architect, and Dr. Keith Hampson, Managing Director, Client Innovation at Acrobatiq.
Acrobatiq Blog · Jul 20, 2015 02:47pm
Erik Moody, Assistant Professor of Psychology at Marist College, says, "I know high-quality online courses with analytics work because we do research on it." In 2011, Moody was part of a research project that used predictive analytics to help instructors identify at-risk students and intervene early to promote student success. As a result, Marist received the 2013 Campus Technology Innovators Award in the teaching and learning category.

What prompted your first foray into online learning this semester?

My interest was spawned by a grant-funded learning analytics research project, the Open Academic Analytics Initiative (OAAI), that we launched in 2011. The OAAI ran a study involving instructors and students at four participating institutions (Marist, Cerritos College, College of the Redwoods, and North Carolina A&T State University). At-risk students were identified using data generated from course management systems and other demographic data. Some of these students were alerted about their at-risk status. Another group received alerts as well as access to additional intervention resources. The basic findings confirmed that even a simple intervention, if provided early, could help students improve their trajectory. The study highlighted important issues that we plan on investigating, such as the timing of an early alert and an observed increase in withdrawal rates. An in-depth description of the study will soon be published in the Journal of Learning Analytics.

How did the research project lead to the hybrid Introduction to Psychology course you’re teaching this semester?

Intro to Psychology at Marist is a traditional sit-down course meeting twice a week for 75 minutes. I’m supplementing the textbook with Acrobatiq’s Introduction to Psychology for the assessments and analytics. I’m interested in using Acrobatiq’s courseware in a traditional classroom setting. I believe that the learning experience will be enhanced by using the strategies implemented in the platform — such as numerous and frequent assessments with immediate feedback — in a traditional setting, which allows for personal contact and one-on-one interactions. In other words, you get the best of both worlds. In the future, we may move to a true hybrid, where the class time is cut in half, freeing up more time for research, committees, advisement, etc.

What’s the feedback so far on the online exercises and assessments?

Students seem to like it. Some like the instant feedback on how they’ve done after the checkpoints. Some do very well. For some, it’s still so new. One of the benefits for students of a hybrid course is that it gives them an opportunity to learn in two different environments; it broadens the formats to which they are exposed.

From a teaching perspective, what features do you find most beneficial?

I like the ‘early and often’ assessments and the fact that students can get instant feedback on the exercises as they complete them, instead of waiting until the quiz is graded. Having an opportunity for them to learn from their mistakes, prior to the assessment, and learn quickly, is very advantageous. Also, as students are doing the assignments, there is a constant stream of emails from them asking me questions. This shows me they are engaged. It seems to have generated a lot more electronic communication outside of the classroom. For me, it’s an indicator that students are engaged and participating.

How are you using The Learning Dashboard™?

I’m concerned with reaching students early enough to make a difference.
I really like the technology and see great potential. I’m very impressed with the organization of the dashboard. The opportunity to get feedback on student performance early and frequently not only allows a student to identify and address a problem; the online format also provides a rich source of data for learning analytics to deliver feedback to students and instructors efficiently and quickly. It would be nice if the checkpoint outcomes were presented in a format that is easier to review in the classroom. Often struggling students don’t recognize that their efforts are not sufficient until it’s too late to recover from a poor or failing grade. Other students remain in denial until confronted by their instructor. A system that assists an instructor in identifying at-risk students early will speed and facilitate an intervention, which might allow a student to improve their performance, resulting in better student outcomes.

What would you say to instructors who care about improving student outcomes, but are reluctant to change what they’ve always done?

Everybody is interested, but there’s still a lot they don’t know about the technology. And it’s a change, not the status quo. As educators, we have a responsibility to stay current with different educational resources and approaches. I know high-quality online courses with analytics work because we do research on it. Anything we can do to offer a great education and free up some time is valuable to tenured professors and adjuncts alike.

References:

Jayaprakash, S., Moody, E., Lauría, E., Regan, J., & Baron, J. (2014). "Early Alert of Academically At-Risk Students: An Open Source Analytics Initiative." Journal of Learning Analytics (accepted, pending publication).

Lauría, E., Moody, E., Jayaprakash, S., Jonnalagadda, N., & Baron, J. (2013). "Open Academic Analytics Initiative: Initial Research Findings." Proceedings of the 3rd International Conference on Learning Analytics and Knowledge. ACM, New York, NY, USA, 150-154. doi:10.1145/2460296.2460325

For more on analytics, download Understanding Analytics, the latest paper in the Acrobatiq Insights series, or view an excerpt from our recent webinar on the Essentials of Analytics in Online Higher Education.
Acrobatiq Blog · Jul 20, 2015 02:47pm
Learning management systems have been the dominant technology in online higher ed since the late 1990s. These systems are, by design, as instructionally agnostic as possible. That is, they are designed so as not to impose specific instructional strategies on the educators who adopt them. This fits with the prevailing division of responsibilities and occupational roles in traditional brick-and-mortar universities, in which course design is primarily a solo activity. The LMS was successful precisely because it didn’t disrupt the status quo. This design also ensured that the software could be used by a diverse set of users, thereby maximizing adoption across an institution - a benefit to both the institutional client and the vendor.

The instructionally agnostic approach to software has its place. But overdependence on this approach has seriously hampered, and will continue to hamper, our efforts to improve the quality and cost-effectiveness of online learning. By relying on instructionally agnostic software, higher education limits itself to the "correspondence" model of distance education - in which we use software primarily as a cost-effective tool for distributing traditional educational experiences based on print and classroom conventions.

We need to complement the agnostic model with software that purposely embodies the best instructional strategies. Instructionally intelligent software does what software does best: it augments and extends our capacity. It enables the educator to do what she can’t currently do given constraints of time, money, or skill. Consider, for example, the benefits of providing students with frequent opportunities to apply knowledge and to receive immediate feedback. Building and facilitating courses with hundreds of opportunities for practice, and providing feedback, immediate or otherwise, is beyond the capacity of most institutions to support. However, educational software can be designed to offer students frequent opportunities for formative assessment and meaningful feedback within a single course. The software captures and embodies our best understanding of what constitutes an effective learning experience, and puts this knowledge to use in a cost-effective way. When done well, it significantly extends our capacity as educators.
Acrobatiq Blog · Jul 20, 2015 02:47pm
Question: What is Acrobatiq’s take on adaptivity in online courseware?

Answer: We view adaptivity as just one application of a broader discipline - data-driven learning. Critical aspects of our view of data-driven learning fall into three areas:

- What data do we have (and how do we design learning to collect that data)?
- How do we model that data?
- What do we do with the information coming from our models to optimize learning?

What data do we have?

We focus primarily on collecting and modeling learning data, which we distinguish from other forms of data that one might work with in an educational setting:

- Learning Analytics: What are students learning? What level of mastery against the learning outcomes of the course have they achieved at any moment in time?
- Engagement Analytics: What are students doing? Are they logging in? When? What are they clicking? How engaged do they appear based on their behavior?
- Institutional Analytics: Based on high-level results data, demographic data, and models, what do we know about the effectiveness of our courses, curricula, student support services, etc.?

For more information on these types of analytics, read Analytics in Online Higher Education: Three Categories, or watch the webinar Q&A Panel Webinar: Essentials of Analytics in Online Higher Education.

How do we model that data?

Every student interaction with Acrobatiq activities is recorded, whether the activities are summative checkpoints or practice questions. This data informs our model about changes in learning tied to the specific skills or knowledge components related to those activities. By using sophisticated statistical modeling in conjunction with sound principles of cognitive science, we can model student learning on learning objectives and even course-wide competencies. This allows us both to identify weak points for students and to target specific misconceptions. We think it is essential to have a precise and nuanced view of a student’s learning state relative to the desired outcomes if we are going to drive actions based on that data.

What do we do with that data? (Adapt the learning environment, among other things.)

Once we have a clear view of a student’s learning state relative to the desired course outcomes, we can leverage that data (often in real time) to improve learning outcomes and to create efficiencies in the teaching process. Examples of the kinds of things we do with that data include:

- Learning dashboards for faculty and mentors: allowing teachers to see at a glance where to focus their efforts in the classroom, or to intervene with individuals or with small groups that need similar instruction.
- Dashboards for learners: giving students insight into their progress in the course, allowing them to optimize their time spent.
- Recommendations for learners: based on those insights, providing guidance to learners on optimizing their experience by letting them know when it’s time to move on and where to focus their attention.
- Adapting the environment to the learner (see the sketch below): using carefully designed questions that synthesize a number of objectives, we present a small scenario to students. A student who we believe has a good understanding of all of the component objectives may be presented with a couple of simple questions to validate their learning, before being encouraged to move on.
If, on the other hand, we are presenting the scenario to a student who has struggled with some of the component skills, we scaffold that student up front, within the context of the scenario, by presenting additional questions targeted to their learning needs. This is different from scaffolding a problem simply because the student gave a wrong answer: we optimize the experience to the student’s needs.
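A minimal sketch of this kind of mastery-based scaffolding logic is below. The threshold, question counts, and data structures are assumptions made for illustration; they are not Acrobatiq’s actual rules.

```python
# Illustrative sketch only: deciding how much scaffolding to present in a
# synthesis scenario, based on estimated mastery of its component objectives.
# The threshold and data shapes are hypothetical.

MASTERY_THRESHOLD = 0.75  # assumed cut-off for "good understanding"

def plan_scenario(mastery_by_objective, scaffold_questions, validation_questions):
    """Return the sequence of questions to present for one scenario."""
    weak = [obj for obj, p in mastery_by_objective.items() if p < MASTERY_THRESHOLD]
    if not weak:
        # Strong on every component objective: a quick validation, then move on.
        return validation_questions[:2]
    # Otherwise, scaffold the weak objectives up front, then validate.
    plan = []
    for obj in weak:
        plan.extend(scaffold_questions.get(obj, []))
    return plan + validation_questions


# Example usage with made-up data.
mastery = {"supply_and_demand": 0.90, "elasticity": 0.55}
scaffolds = {"elasticity": ["elasticity_q1", "elasticity_q2"]}
validation = ["scenario_q1", "scenario_q2", "scenario_q3"]
print(plan_scenario(mastery, scaffolds, validation))
# ['elasticity_q1', 'elasticity_q2', 'scenario_q1', 'scenario_q2', 'scenario_q3']
```

The key design point is that the scaffolding decision is driven by the modeled learning state, not by a single wrong answer.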
Acrobatiq Blog · Jul 20, 2015 02:47pm
To recruit Dr. Jennifer West, a researcher at Rice University, Duke University had to pay for an entire team of researchers who worked with the professor, as well as much of their equipment. Dr. West told NPR, "They actually sent architects to Rice (University) who looked at our lab facilities there, then used that information to go back and design the facility that would work for us at Duke."[1]

Imagine a university making the same effort to secure the best instructional experience for its online students. Imagine institutions feverishly seeking out the best available digital learning content; recruiting the best instructional designers and the best instructors - the ones who have been proven to improve learning outcomes. Imagine if the Chief Academic Officer’s career security depended on her success in this regard. Given the centrality of teaching and learning to the institution, this shouldn’t sound absurd; the problem, of course, is that it does.

[1] Lisa Chow. Duke: 60,000 a Year for College is Actually a Discount. NPR. February 21, 2014. http://www.npr.org/blogs/money/2014/02/14/277015271/duke-60-000-a-year-for-college-is-actually-a-discount Retrieved May 12, 2014.
Acrobatiq Blog · Jul 20, 2015 02:46pm
We have good reason to be excited about online higher education; it has the potential to reconfigure "how we do" higher education in ways that will improve both the quality and efficiency of learning. However, talk of online education’s current "transformative", "revolutionary" and (worst of all) "disruptive" impact on traditional higher education ignores the fact that one of the core components of online higher education has improved only imperceptibly during the last two decades. Online instructional content - the instructional media and activities presented to students - remains largely stuck in the 1990s, barely scratching the surface of the potential we once imagined.

The problems with instructional content stem from how courses are created. In most non-profit colleges and universities, the responsibility for the design and development of instructional content continues to fall to under-resourced and typically ill-prepared individual faculty members. The service departments set up in most institutions to support online learning have not substantially changed this fundamental division of labor. Instructional design professionals in these departments - despite their skills - are forced into secondary roles, often pushed toward providing technical training ("How to Set Up Quizzes in Blackboard") rather than actually working with course instructors to design instruction. The funds available for course development are severely limited to what can be reasonably generated by way of tuition revenue (minus direct expenses) over a few semesters. And the incentive systems of traditional colleges and universities make it illogical for faculty to spend excessive time developing instructional content, even if they had the wide range of skills necessary for this kind of work. There are exceptions to this state of affairs, but too few.

As a result, online instructional content is often hastily constructed, relies heavily on exposition, and is largely devoid of the applications that actually take advantage of the unique properties of digital technology. Most discouraging of all, course design is often not based on research about how students actually learn (i.e., the science of learning). The process is neither rigorous nor ambitious. You’d be hard-pressed to find another sector or institution in which content is both so obviously important and yet given so little attention. (Ironic, given the primacy of evidence-based practice in all other corners of the academy.)

A number of changes are required if we are to begin to truly take advantage of the potential of instructional content in online higher education. Some of the more obvious, such as team-based design and faculty development, have been discussed elsewhere. Below, I offer three additional pieces of the puzzle that will help us get to the next stage of instructional content.

One. (Finally) take course design seriously.

To recruit Dr. Jennifer West, a researcher at Rice University, Duke University had to pay for an entire team of researchers who worked with Dr. West, as well as much of their equipment. Dr. West told NPR, "They actually sent architects to Rice (University) who looked at our lab facilities there, then used that information to go back and design the facility that would work for us at Duke." Imagine a university making the same effort to secure the best instructional experience for its online students.
Imagine institutions feverishly seeking out the best available digital learning experience; recruiting the best instructional designers and the best instructors - the ones who have been proven to improve learning outcomes. Imagine if the Chief Academic Officer’s career security depended on her success in this regard. Given the centrality of teaching and learning to the institution, this shouldn’t sound absurd; the problem, of course, is that it does.

Two. In the digital world, "design" matters.

"Design" here refers to graphic design, industrial design, user experience, and the like. Design has taken on a bigger role in Western societies as a means of mitigating the jarring effect of rapid change. Nowhere is this more obvious than in the realm of personal technology (e.g., apps, tablets, mobile devices). However, online higher education has managed to remain untouched by good design. Its absence is a significant inhibitor of the quality of learning: when we move the locus of education from the classroom to the digital environment, we necessarily change the factors that determine the quality of the student’s experience. In the digital environment, design plays a far more important role than it does in the classroom. "Screens" (laptops, smartphones, tablets, etc.) are design-dependent. The quality of design in screen-based environments dramatically influences the end-user’s experience. Well-designed instructional content and interfaces are easier to use and, thus, more efficient. But the value of great design goes further: it can, like a great educator, direct the student’s attention to what is most important, increase the amount of time that a user is willing to spend on a particular challenge (i.e., time on task), build on a user’s existing knowledge, provoke a positive emotional response (which can facilitate better learning), and make information memorable and "sticky".

Three. We need to know what’s working.

A greater commitment to measuring the effectiveness of learning may be the single greatest driver of innovation in higher education in 2014. Better information about learning outcomes in the hands of educators, regulatory bodies, university leadership, employers, and (especially) students will uncover what’s working and what isn’t. But it will also drive innovation by providing both the impetus for change and the data we need to validate new approaches. Despite the relative ease with which we can capture student activity in online education, most institutions have only begun to take advantage of this opportunity. As of 2014, most analytics in higher education - to the extent that it is used for instructional purposes at all - tends to be limited to "engagement" analytics, which tells us what pages, assignments, and so forth the student has visited. It tells us that the student is "there or not" (i.e., engaged). What it doesn’t tell us is how well the student has mastered specific aspects of the curriculum - which is the realm of true learning analytics. With learning analytics, educators, institutions, and students have the information required to build and test new instructional approaches and to know when they’re working. Great instructional content and learning analytics are natural allies.
When tied to instructional content, analytics provides the intelligence that can determine, for example, which content should be served up to students based on their past performance (adaptive learning), predict the kinds of support that students may need to master specific parts of the curriculum, and help faculty identify the most effective types of instructional content. If the goal of the institution is to help students learn, and the power we assign to these institutions to act as gatekeepers is justified, it is logical (if not a social imperative) that the institution become extremely good at measuring learning.

It’s been almost two decades since we got started with online learning. Since then, we’ve made some significant progress. But for a variety of reasons, the level of innovation and the quality of course design have fallen behind. It’s time we began to take fuller advantage of the opportunities for improved learning that online education affords.

Sources

Lisa Chow. Duke: 60,000 a Year for College is Actually a Discount. NPR. February 21, 2014. http://www.npr.org/blogs/money/2014/02/14/277015271/duke-60-000-a-year-for-college-is-actually-a-discount Retrieved May 12, 2014.

Learning analytics has a quietly subversive quality. Increased measurement of learning outcomes will challenge the existing hierarchy of institutions. Those institutions at the top of the pyramid have nowhere to go but down, and the traditional means by which we’ve measured quality in higher education have worked in their favour. See Lloyd Armstrong. Competitive Higher Education. Changing Higher Education. http://www.changinghighereducation.com/2006/03/competitive_hig.html Retrieved May 12, 2014.

::

Dr. Keith Hampson is Managing Director, Client Innovation at Acrobatiq, a Carnegie Mellon University venture born out of CMU’s long history in cognitive science, human-computer interaction, and software engineering. @Acrobatiq
Acrobatiq Blog · Jul 20, 2015 02:46pm
Colgate University hosted an event last week: the "Innovation + Disruption Symposium". The keynote was none other than Clay Christensen, the godfather of disruptive innovation. Following his talk, seven presidents of private universities fielded questions from the moderator.

Normally, these events are dull. Dreadful, in fact. Few things dampen the likelihood that someone will say something provocative or new more than holding the position of university president. It’s a job for the politically astute. So, at best, these events make for good background noise while you’re working. But a few moments stood out; here’s my favourite.

Clay Christensen - having done his bit on disruptive innovation in higher education - was sitting in the audience listening to the presidents field questions. He rose and noted, half-jokingly, that everyone on the panel disagreed with everything he had just said about the coming disruption in higher education. Laughs ensued. He then asked the panel to imagine that a second panel was on stage with them. This second panel included the founder of the Khan Academy, Paul LeBlanc of Southern New Hampshire University, the president of Western Governors University, and others who are commonly believed to be leading the changes that are unfolding in higher education. Christensen asked the panel to consider what this second, imaginary panel might say that is different from what he had been hearing. The response? David Oxtoby, President of Pomona College, fielded the question: "I actually don’t know any of the people you just mentioned, but . . . "

You can’t make this stuff up. I don’t recall ever hearing or reading anything that so succinctly illustrates the existence of different worlds and perspectives within higher education. To fully appreciate the vastness of the space between these worlds, simply consider the context: In 2014, access to quality, low-cost higher education is considered a national imperative. Improving graduation rates is part of the Obama platform. Student debt recently surpassed credit card debt. Think tanks, research groups, philanthropic foundations, and government initiatives are looking for ways to leverage educational technology. Even the mainstream media - after having long ignored higher education - are talking about the need for greater innovation in higher education. Yet the president of a university - having accepted an invitation to an event to talk about disruptive innovation in higher education - has not heard of many of the people behind these changes. Wow. Just wow.

I’ve written before about the growing tendency of professionals in higher education to exist blissfully unaware of the goings-on in other parts of the sector. At the time, I was concerned I might have overstated the case. I’m no longer concerned.

Note: Quotes can be misleading and can easily be taken out of context. But my interpretation of the significance of the exchange was based on the President’s full response to the question, not the quote alone, which strongly suggested to me, and to a colleague with whom I conferred, that the respondent wasn’t able to speak to the work that these "imaginary" panel members were doing.

:: :: ::

Dr. Keith Hampson is Managing Director, Client Innovation at Acrobatiq, a Carnegie Mellon University venture born out of CMU’s long history in cognitive science, human-computer interaction, and software engineering. @Acrobatiq
Acrobatiq Blog · Jul 20, 2015 02:45pm
Our very own Chief Learning Scientist, Marsha Lovett, is featured in the Trib’s recent article on the Evolution of Higher Education by Debra Erdley. Marsha, who is also Carnegie Mellon’s Director of the Eberly Center for Teaching Excellence and Educational Innovation and a Teaching Professor in the Department of Psychology, helped us create courses for Western Governors using decades of learning science from Carnegie Mellon University’s Open Learning Initiative (OLI). You can read the full article here. For more resources on learning science, view Marsha’s recent webcast, Unlocking Learning through Cognitive Science.
Acrobatiq Blog · Jul 20, 2015 02:45pm
Working with its ad agency, Leo Burnett, Heinz Ketchup changed the criteria of what constitutes good ketchup and, as a result, increased its market share. They defined the product’s relatively thick quality as the new ideal. The difficulty of getting the ketchup out of the bottle became a marker of quality.

The criteria we use to evaluate different aspects of our world can have an immense impact. Different criteria encourage different behaviours and outcomes, whether we’re talking about something as banal as ketchup or more substantial matters such as how we evaluate political leaders (e.g., great at speeches, but no experience) or what constitutes great higher education. In most sectors, evaluations strike a balance between the issues of cost and quality. Both concepts can be interpreted in a number of ways, of course: cost can include the price paid, but also the "cost" of accessing the product or service. Quality can focus on the experience of using the product or service, but also on the ways in which ownership of the product or service will position the user in social circles.

Higher education is unique in this regard. The tendency in this sector is to focus almost exclusively on issues pertaining to quality and to downplay the relevance of cost. This is unfortunate. The primary focus on quality sounds ideal, but it serves to dramatically limit the range of options that higher education offers students. Seeking out innovations that offer new combinations of cost and quality will increase the diversity of solutions available. The objective should be to maximize value: to find new ways to improve the balance between quality and cost. A wider range of options is needed at various price points.

"Free, as in cost-free"

Neglect of price is evident in criticism of MOOCs. This comment followed The Economist’s recent articles on online higher education (see "Creative Destruction" and "The Digital Degree"): "I’m a prof at a mid-sized Canadian university. Plenty of my students have told me that they’ve looked at the MOOC recorded lectures from MIT and Harvard, and they weren’t any better than what we offered." A direct comparison of traditional online higher education and MOOCs based solely on quality is insufficient for determining the significance of this new format and business model — because, as everyone knows, MOOCs are free (or close to free). The dramatically greater scale (number of end-users) of the MOOC model provides economies which in turn allow for higher quality relative to cost (i.e., value). Consequently, it should be of no solace to our Canadian academic that his courses are as good. The fact that this isn’t obvious reflects the tendency to downplay the relevance of cost. It’s also naive.

"She got into a good school . . . "

The tendency to think only in terms of quality is also evident in the way we compare institutions. Students are told that Yale, Vassar, and other selective institutions are "great institutions" that are better than state universities, which in turn are better than community colleges, and so on. The fact that tuition levels between different institutions can vary by as much as 1500% is downplayed. While everyone understands the importance of relative price on an intellectual level, we continue to reinforce this odd evaluation scheme year after year. In any other sector of the economy, this logic would seem bizarre. This accepted logic and criteria helps to reinforce the tendency of institutions to move in unison toward a singular notion of excellence.
Clayton Christensen and others have described this as the "Harvard DNA" — the guiding North Star of higher education that encourages universities to gravitate toward the selective research university model. We see this same logic in play when colleges seek to transform themselves into universities, and when mid-tier schools like George Washington University and NYU use dramatic increases in tuition fees to signal "excellence". (See Daniel Luzer’s excellent article, The Prestige Racket.)

The narrow definition of value can also suppress innovations in higher education that offer excellent value at far lower prices. StraighterLine pioneered a business model that allows students to pay low monthly fees for ACE-accredited courses as they begin their college and university careers. While the company has managed to succeed, its approach challenges the orthodoxy in higher education that equates low prices with bad education. But at these monthly tuition levels, the value is greater than what students would receive from most other education providers.

A broader and more fluid evaluation scheme in higher education that considers value — not merely quality — will open the door to new instructional strategies, business models, and types of programs.

:: ::

Dr. Keith Hampson is Managing Director, Client Innovation at Acrobatiq, a Carnegie Mellon University venture born out of CMU’s long history in cognitive science, human-computer interaction, and software engineering. @Acrobatiq
Acrobatiq Blog · Jul 20, 2015 02:45pm
"Economies of scale are factors that cause the average cost of producing something to fall as the volume of its output increases." One of the common and more provocative scenarios for MOOCs involves using these courses at multiple institutions. In other words, build once (e.g. by Harvard), use often (multiple institutions). The benefit of this approach, at least potentially, is that by sharing content across multiple institutions we drive down the cost of offering online courses and, second, the relatively high volume of users makes it possible to increase the level of investment in each course. San Jose State University’s attempt to integrate the Udacity course serves as a high profile example. This isn’t a new idea. People have been considering how scale might fit into higher education since the 1990s. It wasn’t until MOOCs that we had a concrete and well-known example to serve as focal point for this discussion. Higher education has not put a great deal of effort into finding ways to reduce costs through scale. While other sectors seized the new economics of digital content storage and distribution, higher education has continued to produce and use digital instructional content locally; a digital cottage industry model, of sorts. For elite institutions or those aspiring to be elite, scale runs counter to one of the trappings of elite status: exclusivity. For others, the highly decentralized organization of the institutions makes institution-wide scale improbable. Another factor is concern amongst decision-makers about the impact of sharing courses on labour market value and autonomy. "Let’s not kid ourselves; administrators at the CSU are beginning a process of replacing faculty with cheap online education." San Jose State University faculty responding to the call to use Michael Sandel’s MOOC as part of their program. Just How Common Is It? Scale is directly tied to the degree to which curriculum is common or shared across institutions.  This component of scale hasn’t, to my knowledge, been sufficiently addressed. If a course is to "scale" there must significant commonalities across institutions of higher education of what they teach and how. It is implied in the Coursera/Udacity/edX business model — but surely never stated outright — that undergraduate curriculum in the US (and beyond) is sufficiently common. Commonality is the premise of the textbook industry, of course. Certain courses, particularly in the first and second year of programs,  can be satisfied by a single textbook across hundreds of institutions. US textbooks are used in other nations, in fact — only modified lightly to refer to local conditions. A highly provocative study by (what was then called) Coopers and Lybrand suggested that 25 individual courses constituted 80% of registrations. The concept of The Long Tail might provide a structured approach to understanding the degree to which courses are common and how this might impact costs. Chris Anderson, The Long Tail Chris Anderson’s "The Long Tail" (2004) contends that the Internet has fundamentally changed the economics of producing and distributing digital products. "Shelf space" on the Internet is virtually infinite and increasingly inexpensive. It’s now financially feasible for vendors to sell a much wider variety of digital products, particularly books, films, and music and other media. Marketing strategy is shifting from a dependence on a relatively limited number of "hits" or "blockbusters"  (e.g. 
Anderson argues that consumers have historically purchased "hits" not because they are indifferent to less popular fare, but because of a lack of choice. The Internet is now removing the bottleneck between suppliers and consumers. And as search and distribution technologies improve, and costs continue to decrease, Anderson forecasts that the top sellers in a variety of markets will constitute a smaller share of total sales, and that the number of different products available will increase dramatically (i.e., a further flattening and lengthening of the distribution of sales). While the top-rated television shows often captured more than 50% of the audience in the 1960s, today they represent only 10% to 15%, and declining. New business models like Netflix have only amplified the shift.

The Endurance of "Hits" and Digital Content in Higher Education

Several studies have suggested that Anderson’s original work overstated the degree to which people migrated away from a select number of "hits". Anita Elberse’s analysis, Should You Invest in the Long Tail? (Harvard Business Review), suggests that the market for "blockbusters" remains largely safe from the onslaught of multiplying niche markets. Despite the changing economics of content authoring and distribution that Anderson describes, the bulk of sales are still found in the "head", and the "tail" is remarkably flat. Greater variety is not always matched by demand. The degree to which Elberse’s argument invalidates the Long Tail theory is somewhat dependent on where we draw the line between the "head" and the "tail" - that is to say, on what we think constitutes a "hit". What’s most useful about her work is that it reminds us that there are important forces at play in each sector that give shape to the distribution of sales (head and tail).

The insights from Anderson, and the questions posed about these insights by Elberse, can help us understand and anticipate the factors that might shape the demand for a more diverse range of content in higher education. Do we have a preference or need for "hits" in higher education? As a starting point for addressing these questions, I offer three issues that may affect the length of the "tail" of content in digital higher ed:

Quality (Re)assurance. Do we need assurances from others in the field about the quality of content, and from whom, exactly? There are conventions in place: in traditional textbook publishing, it is conventional to employ currently employed academics from (preferably) well-known institutions of higher education as authors. In OER, we find the use of simple rating systems, such as stars (one to five), to crowd-source evaluations. To what degree will the need for assurance from others limit the expansion of the "tail"?

Consistent and Coherent Curriculum. To what extent must the content be consistent with the curriculum within the institution and other institutions? Although not to the same degree as K-12, higher education is a "system" in which students progress, step by step. There are levels into which content must fit. When a student moves from first year to second year, or transfers from one school to another, there is an assumption (hope?) that the first-year accounting course at University A is roughly equivalent to the same course at University B. The Bologna Process is relevant here.

Production Quality. How important is it to educators and students that the content they use meet a minimum standard of production quality?
That is to say, at what point does "home-made" content become a liability because it is either difficult to integrate into an LMS, "buggy" (in the case of content embedded in applications), or simply difficult for students and instructors to use? How far along the "tail" will content of sufficient quality be found?

As the variety of content increases in the coming years, educators, institutions, and publishers may want to pay attention to these and other issues to determine how they go about creating, acquiring, and distributing content. Although it’s too early to be certain, my suspicion is that, as in the markets for music, film, and books, the demand for "hits" in digital edu content will remain surprisingly strong.

:: ::

Note: A number of people have written about the relationship between the Long Tail and education. You’ll recognize, though, that some of them use the concept of the Long Tail to analyze the diversification of students - that is, the tail gets longer as more people participate in higher education. Instead, I use the concept to analyze the diversity of educational content. Although the former approach may be of great value, my focus on educational content is more in line with Anderson’s original use of the concept.

Note 2: The concept of "mass customization" overlaps with the Long Tail in important ways. For more on mass customization, see "How Technology Can Drive the Next Wave of Mass Customization" (McKinsey).

Dr. Keith Hampson is Managing Director, Client Innovation at Acrobatiq, a Carnegie Mellon University venture born out of CMU’s long history in cognitive science, human-computer interaction, and software engineering. @keithhampson & @Acrobatiq
Acrobatiq Blog · Jul 20, 2015 02:44pm
Questioning Lectures

The old-school lecture is taking a thumping. In a world where more and more of our experiences are web-based and disconnected from location and time, the idea that we would find it logical to gather students in a single location at a specific time to hear a presentation seems increasingly odd. "Imagine", Donald Clark writes, "if a movie were shown only once. Or your local newspaper was read out just once a day in the local square. Or novelists read their books out once to an invited audience. That’s face-to-face lectures for you: it’s that stupid."

Others focus more on the instructional value of lectures, regardless of the role of technology. Graham Gibbs: "More than 700 studies have confirmed that lectures are less effective than a wide range of methods for achieving almost every educational goal you can think of. Even for the straightforward objective of transmitting factual information, they are no better than a host of alternatives, including private reading. Moreover, lectures inspire students less than other methods, and lead to less study afterwards."

So why do we hang on to lectures? Explanations of the persistence of lectures point to the usual suspects: the challenge of introducing new instructional strategies, the dictates of the physical space, class size, the difficulty (and possibly anxiety) of using more interactive instructional experiences, and more.

Privileging the Original and One-of-a-Kind

One factor that we may not have considered is how lectures fit into a broader cultural framework that privileges original and live events (or one-of-a-kind objects) over reproductions and technologically mediated experiences. A line was drawn during modernity between cultural practices and artefacts that are original and one-of-a-kind, such as paintings, and reproductions of the original, made possible by technology. The original is highly valued; the reproduction, far less so. This basic distinction unfolds in different arenas in roughly the same fashion:

- One-of-a-kind artisan crafts vs. mass-manufactured "crafts"
- Live music performances vs. recordings
- Paintings vs. photographic reproductions
- Haute couture fashion vs. "pret a porter" (ready-to-wear)

The increased capacity to make reproductions, according to theorists like Walter Benjamin of the Frankfurt School, served to reconfigure the meaning and value of both the original and the copy. The presence of ubiquitous copies can weaken the value of the original, but it still maintains a privileged status. The original has an "aura". (See "The Work of Art in the Age of Mechanical Reproduction", 1936.)

The shift from lectures to digital higher education is not merely a migration from one instructional model to another, but a shift from a one-time, "original", live event or object to a recorded and reproducible event or object. As with art and other cultural artefacts and practices, the original is privileged. Visitors to the Louvre take photographs of the Mona Lisa, an image they know only through photographs. The distinctions often reveal themselves through language: the choice of words and the metaphors we use. Defenders of the lecture, like Mark Edmundson, tell us that "Every memorable class is a bit like a jazz composition. There is the basic melody that you work with. It is defined by the syllabus. But there is also a considerable measure of improvisation against that disciplining background."
The lecture, Abigail Walthausen explains, "is an art, and like other arts such as painting, musicianship and writing, it takes real dedication and many hours of practice to excel at." Clay Shirky rightly notes that defenders of lectures believe that face-to-face education is the only "real" education — everything else is a facsimile, at best. (Shirky proposed The MOOC Criticism Drinking Game: take a swig whenever someone says "real", "true", or "genuine" when questioning the value of MOOCs.)

Given that we tend to privilege live and original experiences, it is understandable that academics would celebrate the live educational format — and want to protect their place within it. There are few professions that involve strapping on a microphone and speaking to large groups of people — sometimes hundreds at a time — on a daily basis. Fewer occupations still call for the professional to offer their own unique perspective on a topic. (The sacred but often questioned link between teaching and research is key here.) I wouldn’t be the first to identify the link between the identity of the academic and the archetype of the lone artist — an individual working doggedly on a personal project before presenting it to the world.

Dr. Keith Hampson is Managing Director, Client Innovation at Acrobatiq, a Carnegie Mellon University venture born out of CMU’s long history in cognitive science, human-computer interaction, and software engineering. @Acrobatiq
Acrobatiq Blog · Jul 20, 2015 02:44pm
Working with it’s ad agency Leo Burnett, Heinz Ketchup changed the criteria of what constitutes good ketchup and, as a result, increased its market share. They defined the products’ relatively thick quality as the new ideal. The difficulty of getting the ketchup out of the bottle became a marker of quality. The criteria we use to evaluate different aspects of our world can have an immense impact. Different criteria encourage different behaviours and outcomes, whether we’re talking about something as banal as ketchup or more substantial matters such as how we evaluate political leaders (e.g. great at speeches, but no experience) or what constitutes great higher education. In most sectors, evaluations strike a balance between the issues of cost and quality. Both concepts can be interpreted in a number of ways, of course: cost can include the price paid, but also the "cost" of accessing the product or service. Quality can focus on the experience of using the product/service, but also the ways in which ownership of the product/service will position the user in social circles. Higher education is unique in this regard. The tendency in this sector is to focus almost exclusively on issues pertaining to quality and to downplay the relevance of cost. This is unfortunate. The primary focus on quality sounds ideal, but it serves to dramatically limit the range of options that higher education offers students. Seeking out innovations that offer new combinations of cost and quality will increase the diversity of solutions available. The objective should be to maximize value; to find new ways to improve the balance between quality and cost.  A wider range of options are needed at various price points. "Free, as in cost-free" Neglect of price is evident in criticism of MOOCs. This comment followed The Economist recent articles on online higher education (See "Creative Destruction" and "The Digital Degree"): "I’m a prof at a mid-sized Canadian university. Plenty of my students have told me that they’ve looked at the MOOC recorded lectures from MIT and Harvard, and they weren’t any better than what we offered." A direct comparison of traditional online higher education and MOOCs based solely on quality is insufficient for determining the significance of this new format and business model — because, as everyone knows, MOOCs are free (or close to free).  The dramatically greater scale (number of end-users) of the MOOC model provides economies which in turn allow for higher quality relative to cost (i.e. value). Consequently, it should be of no solace to our Canadian academic that his courses are as good. The fact that this isn’t obvious reflects the tendency to downplay the relevance of cost. It’s also naive. "She got into a good school . . . " The tendency to only think in terms of quality is also evident in the way in which we compare institutions. Students are told that Yale, Vassar and other selective institutions are "great institutions" that are better than state universities, which  in turn are better than community colleges, and so on. The fact that tuition levels between different institutions can vary by as much as 1500% is downplayed. While everyone understands the importance of relative price on an intellectual level, we continue to reinforce this odd evaluation scheme year after year. In any other sector of the economy, this logic would seem bizarre. This accepted logic and criteria helps to reinforce the tendency of institutions to move in unison toward a singular notion of excellence. 
Clayton Christensen, and others, have defined this as the Harvard DNA, the guiding North Star of higher education that encourages universities to gravitate toward the selective research university model. We see this same logic in play when colleges seek to transform themselves into universities, and when mid-tier schools like George Washington University and NYU use dramatic increases in tuition fees to signal "excellence". (See Daniel Luzer's excellent article, The Prestige Racket.) The narrow definition of value can also suppress innovations in higher education that offer excellent value at far lower prices. StraighterLine pioneered a business model that allows students to pay low monthly fees for ACE-accredited courses as they begin their college and university careers. While the company has managed to succeed, its approach challenges the orthodoxy in higher education that equates low prices with bad education. Yet at these monthly tuition levels, the value is greater than what students would receive from most other education providers. A broader and more fluid evaluation scheme in higher education, one that considers value and not merely quality, will open the door to new instructional strategies, business models, and types of programs. :: :: Dr. Keith Hampson is Managing Director, Client Innovation at Acrobatiq, a Carnegie Mellon University venture born out of CMU's long history in cognitive science, human-computer interaction, and software engineering. @Acrobatiq
Acrobatiq . Blog . Jul 20, 2015 02:44pm
"Economies of scale are factors that cause the average cost of producing something to fall as the volume of its output increases." One of the more common and provocative scenarios for MOOCs involves using these courses at multiple institutions. In other words, build once (e.g. by Harvard), use often (at multiple institutions). The benefit of this approach, at least potentially, is twofold: first, sharing content across multiple institutions drives down the cost of offering online courses; second, the relatively high volume of users makes it possible to increase the level of investment in each course. San Jose State University's attempt to integrate the Udacity course serves as a high-profile example. This isn't a new idea. People have been considering how scale might fit into higher education since the 1990s. It wasn't until MOOCs that we had a concrete and well-known example to serve as a focal point for this discussion. Higher education has not put a great deal of effort into finding ways to reduce costs through scale. While other sectors seized the new economics of digital content storage and distribution, higher education has continued to produce and use digital instructional content locally; a digital cottage industry model, of sorts. For elite institutions, or those aspiring to be elite, scale runs counter to one of the trappings of elite status: exclusivity. For others, the highly decentralized organization of the institution makes institution-wide scale improbable. Another factor is concern amongst decision-makers about the impact of sharing courses on labour market value and autonomy. "Let's not kid ourselves; administrators at the CSU are beginning a process of replacing faculty with cheap online education." San Jose State University faculty responding to the call to use Michael Sandel's MOOC as part of their program.

Just How Common Is It? Scale is directly tied to the degree to which curriculum is common or shared across institutions. This component of scale hasn't, to my knowledge, been sufficiently addressed. If a course is to "scale", there must be significant commonalities across institutions of higher education in what they teach and how they teach it. It is implied in the Coursera/Udacity/edX business model, though surely never stated outright, that undergraduate curriculum in the US (and beyond) is sufficiently common. Commonality is the premise of the textbook industry, of course. Certain courses, particularly in the first and second year of programs, can be satisfied by a single textbook across hundreds of institutions. US textbooks are used in other nations, in fact, modified only lightly to refer to local conditions. A highly provocative study by (what was then called) Coopers and Lybrand suggested that 25 individual courses constituted 80% of registrations.

The concept of The Long Tail might provide a structured approach to understanding the degree to which courses are common and how this might impact costs. Chris Anderson's "The Long Tail" (2004) contends that the Internet has fundamentally changed the economics of producing and distributing digital products. "Shelf space" on the Internet is virtually infinite and increasingly inexpensive. It's now financially feasible for vendors to sell a much wider variety of digital products, particularly books, films, music and other media. Marketing strategy is shifting from a dependence on a relatively limited number of "hits" or "blockbusters" (e.g. Top 40 radio; New York Times bestseller lists) to serving niches.
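To make the head-versus-tail question concrete, here is a minimal sketch (not from the original post) that assumes course registrations follow a Zipf-like popularity distribution. The course count, cutoffs, and exponent are hypothetical, chosen only to show how the choice of cutoff (what we count as a "hit") changes the share of registrations sitting in the "head".

# Illustrative sketch only: a Zipf-like model of course popularity with
# hypothetical parameters, used to show how the "head" share depends on
# where we draw the head/tail line.

def head_share(num_courses: int, head_size: int, exponent: float = 1.0) -> float:
    """Fraction of total registrations captured by the top `head_size` courses."""
    weights = [1 / rank ** exponent for rank in range(1, num_courses + 1)]
    return sum(weights[:head_size]) / sum(weights)

for cutoff in (25, 100, 500):
    share = head_share(num_courses=5000, head_size=cutoff)
    print(f"Top {cutoff:4d} of 5,000 courses -> {share:.0%} of registrations")

Under these assumptions the top 25 courses capture roughly 40% of registrations; the Coopers and Lybrand figure quoted above (25 courses accounting for 80% of registrations) would imply an even steeper concentration than this simple model produces.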
Anderson argues that consumers have historically purchased "hits" not because they are indifferent to less popular fare, but because of a lack of choice. The Internet is now removing the bottleneck between suppliers and consumers. And as search and distribution technologies improve, and costs continue to decrease, Anderson forecasts that the top sellers in a variety of markets will constitute a smaller share of total sales, and the number of different products available will increase dramatically (i.e. a further flattening and lengthening of the distribution of sales). While the top-rated television shows often captured more than 50% of the audience in the 1960s, today they represent only 10% to 15%, and that share is declining. New business models like Netflix have only amplified the shift.

The Endurance of "Hits" and Digital Content in Higher Education Several studies have suggested that Anderson's original work overstated the degree to which people migrated away from a select number of "hits". Anita Elberse's analysis, Should You Invest in the Long Tail? (Harvard Business Review), suggests that the market for "blockbusters" remains largely safe from the onslaught of multiplying niche markets. Despite the changing economics of content authoring and distribution that Anderson describes, the bulk of sales are still found in the "head" and the "tail" is remarkably flat. Greater variety is not always matched by demand. The degree to which Elberse's argument invalidates the Long Tail theory depends somewhat on where we draw the line between the "head" and the "tail"; that is to say, on what we think constitutes a "hit". What's most useful about her work is that it reminds us that there are important forces at play in each sector that give shape to the distribution of sales (head and tail). The insights from Anderson, and the questions posed about those insights by Elberse, can help us understand and anticipate the factors that might shape the demand for a more diverse range of content in higher education. Do we have a preference or need for "hits" in higher education? As a starting point for addressing these questions, I offer three issues that may affect the length of the "tail" of content in digital higher ed:

Quality (Re)assurance. Do we need assurances from others in the field about the quality of content, and from whom, exactly? There are conventions in place: in traditional textbook publishing, it is conventional to recruit authors who are currently employed academics at (preferably) well-known institutions of higher education. In OER, we find the use of simple rating systems, such as stars (one to five), to crowd-source evaluations. To what degree will the need for assurance from others limit the expansion of the "tail"?

Consistent and Coherent Curriculum. To what extent must the content be consistent with the curriculum within the institution and at other institutions? Although not to the same degree as K-12, higher education is a "system" through which students progress, step by step. There are levels into which content must fit. When a student moves from first year to second year, or transfers from one school to another, there is an assumption (hope?) that the first-year accounting course at University A is roughly equivalent to the same course at University B. The Bologna Process is relevant here.

Production Quality. How important is it to educators and students that the content they use meets a minimum standard of production quality?
That is to say, at what point does "home-made" content become a liability because it is difficult to integrate into an LMS, "buggy" (in the case of content embedded in applications), or simply difficult for students and instructors to use? How far along the "tail" will content of sufficient quality be found? As the variety of content increases in the coming years, educators, institutions and publishers may want to pay attention to these and other issues as they determine how to go about creating, acquiring and distributing content. Although it's too early to be certain, my suspicion is that, as in the markets for music, film, and books, the demand for "hits" in digital educational content will remain surprisingly strong. :: :: Note: A number of people have written about the relationship between The Long Tail and education. You'll recognize, though, that some of them use the concept of the Long Tail to analyze the diversification of students; that is, the tail gets longer as more people participate in higher education. Instead, I use the concept to analyze the diversity of educational content. Although the former approach may be of great value, my focus on educational content is more in line with Anderson's original use of the concept. Note 2: The concept of "mass customization" overlaps with The Long Tail in important ways. For more on mass customization, see "How Technology Can Drive the Next Wave of Mass Customization" (McKinsey). Dr. Keith Hampson is Managing Director, Client Innovation at Acrobatiq, a Carnegie Mellon University venture born out of CMU's long history in cognitive science, human-computer interaction, and software engineering. @keithhampson & @Acrobatiq
Acrobatiq . Blog . Jul 20, 2015 02:44pm
:: :: "Worth Reading" is a hand-picked weekly collection of new, not-so-new articles and downright old ideas, events and other items for higher education professionals. 1 A solid introduction to the concept of bundling/unbundling as it relates to higher education. Unbundling And Re-bundling In Higher Education "As a technology matures, however, it eventually overshoots the raw performance that many customers need. As a result, new disruptive innovations emerge that are more modular—or unbundled—as customers become less willing to pay for things like power and increased reliability but instead prioritize the ability to customize affordably by mixing and matching different pieces that fit together according to precise standards." 2 I love a good analogy. How College is Like Sunscreen "College students are paying more. They are taking on more debt. They are accepting worse jobs after they graduate and earning less than they did just five years ago. So how could it possibly be true that college is more important than ever? The answer is sunscreen." 3 This next piece was written in 1995. Interesting predictions about the future of online higher education. Electronics and the Dim Future of the University "Thus, while new communications technologies are likely to strengthen research, they will also weaken the traditional major institutions of learning, the universities. Instead of prospering with the new tools, many of the traditional functions of universities will be superseded, their financial base eroded, their technology replaced and their role in intellectual inquiry reduced. This is not a cheerful scenario for higher education." 4 In particular, take a look at idea five, "Software Will Eat the World". The Man Who Makes the Future: Wired Icon Marc Andreessen "Andreessen believes that enormous technology companies can now be built around the use of hyperintelligent software to revolutionize whole sectors of the economy, from retail to real estate to health care."  
Acrobatiq . Blog . Jul 20, 2015 02:44pm
What makes Acrobatiq courses unique in the marketplace? Some say it's the evidence-based approach to learning and course design, based on ten years of research from CMU's Open Learning Initiative; others say it's the Learning Dashboard™. We know that it's both, and the fact that we never stop iterating. This summer we are releasing newly developed features in a private beta period with select customers. These new features are being rolled out on a course-by-course basis this summer for use in the fall of 2014. With our latest update, we have released two major new pieces of functionality that represent a giant leap forward in enabling faculty to gain insight into the work their students are doing and, more importantly, how that work impacts learning.

Learning Dashboard™ Using a customizable, question-based interface, instructors can quickly gain insight into student learning. Previously, an instructor only had access to learning information broken up in one predefined way. However, we know that learning is only one piece of the puzzle. When that information is related to work, instructors can much more easily identify students who are at risk and tailor the intervention based on how much work they are (or are not) completing in the course. As users create tailored views (such as identifying a group of students to track, or filtering by schedule), they can save their favorite views. For example, the image below is a snapshot of students' learning states since the start of the course. Now, our engagement and learning data allow the user to understand the relationship between a student's work and their learning progress, and to tailor that view to the method of instruction they are using (standard class, flipped classroom, competency-based mentoring, etc.).

View Student Work For the first time, instructors can see all of the formative practice work their students have done (as well as quiz and homework work) just as the student sees it. This includes any recommendations students receive and achievements they earn as they work. Besides providing insight at the individual student level, this facilitates communication between teachers and students who connect remotely. Instructors will finally have the ability to specifically target their online students who are at risk and deliver timely, targeted interventions.
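As a rough illustration of the general idea described above, and not the actual Learning Dashboard logic or any Acrobatiq API, the sketch below combines a student's completed practice work with an estimated learning state to suggest which kind of intervention might be appropriate. The thresholds, field names, and student records are all hypothetical.

# Illustrative sketch only -- not the Learning Dashboard implementation.
# It relates how much work a student has completed to an estimate of their
# learning state in order to suggest a type of intervention.

from dataclasses import dataclass

@dataclass
class StudentSnapshot:
    name: str
    work_completed: float      # fraction of assigned practice work completed (0..1)
    learning_estimate: float   # estimated mastery of the targeted objectives (0..1)

def suggest_intervention(s: StudentSnapshot) -> str:
    # Hypothetical thresholds: flag low work completion first,
    # then low learning despite substantial work.
    if s.work_completed < 0.4:
        return "low engagement: nudge the student to complete the practice work"
    if s.learning_estimate < 0.5:
        return "doing the work but struggling: target the misunderstood concepts"
    return "on track"

students = [
    StudentSnapshot("Student A", work_completed=0.2, learning_estimate=0.3),
    StudentSnapshot("Student B", work_completed=0.9, learning_estimate=0.4),
    StudentSnapshot("Student C", work_completed=0.8, learning_estimate=0.85),
]

for s in students:
    print(s.name, "->", suggest_intervention(s))

The point of separating the two signals is the one made in the post: a student who has done little work needs a different intervention than one who is working steadily but not learning.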
Acrobatiq . Blog . Jul 20, 2015 02:44pm
Nick Di Nardo, host of "Meet Education," interviews Dr. Keith Hampson of Acrobatiq.
Acrobatiq . Blog . Jul 20, 2015 02:44pm
"Worth Reading" is a hand-picked weekly collection of new, not-so-new articles and downright old ideas, events and other items for higher education professionals.  1 Measuring Innovation in Education By OECD "The ability to measure innovation is essential to an improvement strategy in education. Knowing whether, and how much, practices are changing within classrooms and educational organisations, how teachers develop and use their pedagogical resources, and to what extent change can be linked to improvements would provide a substantial increase in the international education knowledge base. Measuring Innovation in Education offers new perspectives to address this need for measurement in educational innovation through a comparison of innovation in education to innovation in other sectors, identification of specific innovations across educational systems, and construction of metrics to examine the relationship between educational innovation and changes in educational outcomes." 2 Thoughts on Innovation in Higher Education: Sorting the Revolutionary Change from the Merely Cosmetic By Peter Bryant "What became clear to me after presenting these opinions in a number of places is that there is an accepted and arguably melodramatic narrative that MOOCs will change the world, the education has already passed a tipping point, weak brands will die and strong brands will survive, just like the music industry. Anyone who argues against this is misinformed, ignorant or an idealist pining for the days gone by. And it is easy to portray those who disagree with you as naysayers, luddites or people who just don’t get it. Now, this is not a universal set of behaviours. I have had some engaging and pragmatic debates with MOOC players, and both our understandings are better for it." 3 Five Ways to Maximize Your Investment in Adaptive Learning By Eduventures "If you’re considering implementing adaptive learning at your institution, here are five things to consider based on Eduventures research and recent interviews from our latest report, Maximizing Investment in Adaptive Learning." 4 Competency vs. Mastery by John Ebersole "On close examination, one might ask if competency-based education (or CBE) programs are really about "competency," or are they concerned with something else? Perhaps what is being measured is more closely akin to subject matter "mastery." The latter can be determined in a relatively straightforward manner, using various forms of examinations, projects and other forms of assessment. However, an understanding of theories, concepts and terms tells us little about an individual’s ability to apply any of these in practice, let alone doing so with the skill and proficiency which would be associated with competence. Deeming someone competent, in a professional sense, is a task that few competency-based education programs address. While doing an excellent job, in many instances, of determining mastery of a body of knowledge, most fall short in the assessment of true competence." 5 Why Do Americans Stink at Math?  By Elizabeth Green "The Americans might have invented the world’s best methods for teaching math to children, but it was difficult to find anyone actually using them. It wasn’t the first time that Americans had dreamed up a better way to teach math and then failed to implement it. The same pattern played out in the 1960s, when schools gripped by a post-Sputnik inferiority complex unveiled an ambitious "new math," only to find, a few years later, that nothing actually changed. 
In fact, efforts to introduce a better way of teaching math stretch back to the 1800s. The story is the same every time: a big, excited push, followed by mass confusion and then a return to conventional practices."  
Acrobatiq . Blog . Jul 20, 2015 02:43pm
"Worth Reading" is a hand-picked weekly collection of new, not-so-new articles and downright old ideas, events and other items for higher education professionals. We think they’re worth reading. 1 Tottering Ivory Towers  By Stuart Butler "The higher education business should look to earlier episodes of technological tumult to gauge its future Low-cost ventures of so-so quality also pose a potentially devastating threat by undermining cross-subsidies in a traditional business model. Website advertising and Craigslist were deadly to the economics of newspapers because experienced journalists and news bureaus need cross subsidies to survive, just as full-service hospitals do. The reason why getting a few stitches in the ER can cost a small fortune is that ER procedures make possible high-quality care in low-revenue generating areas such as pediatrics. That, in turn, is why the growth of walk-in clinics and other providers offering low prices for low-cost services is such a threat to big hospitals. The breakup of such cross-subsidized services is often referred to as "unbundling", and it is a worrying phenomenon for "full-service" providers in any industry. This is precisely what we are seeing in higher education." 2 The feds tried to rate colleges in 1911. It was a disaster By Libby Nelson "Somewhere in the US Education Department, statistical experts and policymakers are at work on a highly controversial idea: a federal system to rate colleges based on their quality, much as Consumer Reports rates refrigerators. Many colleges hate this idea, and it turns out the uproar is nothing new. The forerunner of the modern Education Department tried a similar idea in 1911. At the time, colleges opposed the federal quality ratings so bitterly that two American presidents eventually intervened to halt their publication." 3 The Future of College? "A brash tech entrepreneur thinks he can reinvent higher education by stripping it down to its essence, eliminating lectures and tenure along with football games, ivy-covered buildings, and research libraries. What if he’s right? The paradox of undergraduate education in the United States is that it is the envy of the world, but also tremendously beleaguered. In that way it resembles the U.S. health-care sector. Both carry price tags that shock the conscience of citizens of other developed countries. They’re both tied up inextricably with government, through student loans and federal research funding or through Medicare. But if you can afford the Mayo Clinic, the United States is the best place in the world to get sick. And if you get a scholarship to Stanford, you should take it, and turn down offers from even the best universities in Europe, Australia, or Japan. (Most likely, though, you won’t get that scholarship. The average U.S. college graduate in 2014 carried $33,000 of debt.)" 4 A return to the elephant of college pricing  By Dr. Lloyd Armstrong "Looking at the elephant of higher education pricing from a bit of a distance, we see that the strategy of raising published prices very rapidly and trying to mitigate resulting problems by providing aid - high tuition, high aid-  doesn’t work particularly well in the eyes of any constituency, and for some it appears to be a near catastrophe. NOBODY LIKES THE ELEPHANT! It may be time to begin to look at alternative pricing strategies."
Acrobatiq . Blog . Jul 20, 2015 02:43pm
"Worth Reading" is a hand-picked weekly collection of new, not-so-new articles and downright old ideas, events and other items for higher education professionals. We think they’re worth reading. 1 Got Skills? Retooling Vocational Education (The Economist) Excerpt: "The university bubble is also beginning to burst. Democratising universities has proved an expensive and inefficient way of providing mass higher education. Americans, who led the way, have taken on more than $1 trillion in student debt. But a growing number think that they got poor value for money—taught by PhD students not professors, forced to subsidise expensive research programmes and administrative cadres, and provided, at the end of it all, with a college diploma that no longer automatically brings a desirable job." "Frustration with the status quo is at last leading to a burst of innovation. The internet is well suited to vocational education: it helps reduce costs while making it easier to earn a living while doing some vocational training. Just as important is the birth of a new concept of what is being delivered." 2 Are Universities Going the Way of Record Labels? The Internet’s power to unbundle content and increase personal choice transformed the music industry—and it’s doing the same thing to higher education. Excerpt: "Students are the big winners here. Decreased cost of content combined with increased competition among professors, and lower average ROI for universities per professor, will lead to lower tuition costs and greater choice.Great professors with interdisciplinary knowledge—the great curators—will see license and royalty fees go up as they command economies of scale in distribution. Existing institutions with large endowments will become the record labels: platforms that invest in great talent. And distribution platforms that curate content will do well, commanding both economies of scale and scope." 3 Hire Education: Mastery, Modularization, and the Workforce Revolution Excerpt: "This book illuminates the great disruptive potential of online competency- based education. Workforce training, competency-based learning, and online learning are clearly not new phenomena, but online competency-based education is revolutionary because it marks the critical convergence of multiple vectors: the right learning model, the right technologies, the right customers, and the right business model. In contrast to other recent trends in higher education, particularly the tremendous fanfare around massive open online courses (MOOCs), online competency-based education stands out as the innovation most likely to disrupt higher education. As traditional institutions struggle to innovate from within and other education technology vendors attempt to plug and play into the existing system, online competency-based providers release learning from the constraints of the academy. By breaking down learning into competencies—not by courses or even subject matter—these providers can cost-effectively combine modules of learning into pathways that are agile and adaptable to the changing labor market."
Acrobatiq . Blog . Jul 20, 2015 02:42pm
(Note: Part One in this series on the subject of online consortia can be found here.) :: In the late 90s, during the "early years" of online higher education, many colleges and universities didn't have the internal resources required to build, support and market online education. Some institutions saw fit to join online consortia; by pooling limited resources, each institution gained access to the resources it needed. Many of these early initiatives are still operating.

Access and/or Innovation In a recent review we conducted of online higher ed consortia, we found that the majority of consortia, and the vast majority of those that started ten or more years ago, are designed primarily to increase access. That is, these initiatives define success by the number of online courses created and/or supported by the consortia, and the number of students enrolled in these courses. More courses means more access. Access is obviously important. However, for a number of reasons, we believe that a recasting of the consortia model for online higher education would be beneficial. Given the state of online education, the focus now needs to shift from ensuring institutions can launch and support online courses to stimulating innovation and improving quality.

Time to Focus on Innovation First, and most obviously, the needs of member institutions have changed, and consequently consortia need to change as well. Over the last decade-and-a-half, most colleges and universities have significantly augmented their internal capacity to develop, support and market online education. The LMS is now nearly universal. The majority of university leaders see online education as fundamental to institutional strategy, and far more instructors have experience teaching online.

moving beyond the basics . . . As the internal capacity of member institutions increases, the functions that can't be done well (or at all) within each member institution change too. Although this may seem so obvious as to be not worth mentioning, our review suggests otherwise. Many consortia we reviewed continue to provide only the basic requirements of creating and supporting online courses. One consortium, for example, simply assigned a single instructional designer to work with a lone instructor from the member institution to develop an online course. No meaningful quality standards are employed, and the instructor isn't even paid for the course development. Fewer and fewer institutions need these basic services. It isn't surprising that our review found that institutions that have set more ambitious goals for online education are less interested in participating in consortia. Our review suggested that more consortia should focus less on providing basic, increasingly common services and more on helping institutions test and scale more ambitious online learning strategies that can improve outcomes and drive down costs. If the fundamental value proposition of consortia is that they enable member institutions to do what they can't do alone, then the initiative should be deliberately and systematically focused on those functions that are anything but "basic". Services that fall into the category of "ambitious" in 2014 might include the development of rich media, the use of learning analytics, and the development of competency-based programs.

why consortia . . .
Consortia align particularly well with three trends in online higher education:
A slow but important migration to the software model of course development, in which upfront costs for course development are relatively high, but maintenance and distribution costs are marginal. By pooling resources, consortia can accommodate higher upfront costs and then coordinate distribution at scale.
Growing use of analytics to inform and personalize learning. The more data is shared and compared across institutions, the greater its value. Again, consortia are well positioned to facilitate the proper movement of data-generated insights across institutions.
The continuing demand in online education for new, increasingly complex skills and knowledge that are not readily available within each institution. Consortia can serve as a central, shared source of talent and technology across individual institutions.

defining ROI . . . Consortia need to define and then share clearer and more concrete objectives with member institutions. In particular, it would be useful for consortia to provide members with more robust assessments of the initiative's ROI. If success is defined (as noted above) by the number of online courses and students supported by the consortium, then members should be able to assess whether the cost of running the consortium is greater than the value of the actual increase in enrollment and number of courses (a rough sketch of this kind of calculation follows the list of services below). ROI is always difficult in education, but consortia, given their frequently tenuous financial stability, may be less inclined to produce this kind of information. Member institutions should demand it.

built to change . . . Lastly, consortia must be built to change. If, as suggested, the basic purpose and value proposition of consortia is to do what member institutions can't do separately, then the services offered must change as technology, costs, and objectives change. Again, this may seem obvious. But consortia struggle with change like other organizations do. Nevertheless, the value proposition of consortia requires that they continually adjust their services to meet changing conditions. :: Dr. Keith Hampson is Managing Director, Client Innovation at Acrobatiq, a Carnegie Mellon University venture born out of CMU's long history in cognitive science, human-computer interaction, and software engineering. @Acrobatiq

Consortia typically offer a range of services for member institutions:
Course registration and course registration systems
Help desk (technology and/or administrative) for students
Professional development for instructors
Learning management systems
Video conferencing (hardware, software and support)
Webinar hosting and management (hardware, software and support)
Sharing of online courses between institutions
Instructional quality assessment and rubrics
Development of new applications
Multimedia development (instructional material)
Market research services
Instructor training on educational technology
Instructional design
Tutoring services (student online/phone)
Learning object repositories
Project management/coordination
Marketing / clearinghouse of members' courses and programs
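To make the ROI point above concrete, here is a minimal sketch of the kind of calculation members could ask for. It is not drawn from any consortium's actual figures; the cost, enrollment, and revenue numbers are hypothetical.

# Illustrative sketch only: comparing the cost of running a consortium with
# the incremental enrollment revenue members attribute to it. All figures
# are hypothetical.

def consortium_roi(annual_cost: float,
                   incremental_enrollments: int,
                   net_revenue_per_enrollment: float) -> float:
    """Net annual benefit: revenue attributable to the consortium minus its cost."""
    return incremental_enrollments * net_revenue_per_enrollment - annual_cost

# A consortium costing $750,000 a year that members credit with 400
# additional enrollments worth $1,500 each in net revenue.
net = consortium_roi(annual_cost=750_000,
                     incremental_enrollments=400,
                     net_revenue_per_enrollment=1_500)
print(f"Net annual benefit to members: ${net:,.0f}")

Under these assumed numbers the consortium costs more than the enrollment gains it generates, which is exactly the kind of result that member institutions would want surfaced rather than left unexamined.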
Acrobatiq . Blog . Jul 20, 2015 02:39pm
"Worth Reading" is a hand-picked weekly collection of new, not-so-new articles and downright old ideas, events and other items for higher education professionals. 1 Rethinking College PBS News Hour 2 The Professor-Less University Times Higher Education "Two radically contrasting emerging models of higher education in the US offer academics a very different deal, says Steven Ward." "Now that massive open online courses appear to have reached the downward slope of the ever-shifting global higher education reform "hype cycle", other models have emerged on the fringes of tertiary education that promise even more "disruptive innovation" - or at least a great deal of build-up and hand-wringing - in years to come. These models may fundamentally change the professoriate and the university as they have come to be known over the past almost 1,000 years. Or they may be relegated to the start-up dustbin and soon forgotten." 3 The Economic Price of Colleges’ Failures New York Times "Academically Adrift" called into question what college students were actually getting for their increasingly expensive educations. But some critics questioned whether collegiate learning could really be measured by a single test. Critical thinking skills are, moreover, only a means to an end. The end itself is making a successful transition to adulthood: getting a good job, finding a partner, engaging with society. The follow-up study, "Aspiring Adults Adrift," found that, in fact, the skills measured by the C.L.A. make a significant difference when it comes to finding and keeping that crucial first job." 4 Why Can’t OER Enjoy the Same Success as Open Source Software? Ed Surge https://www.edsurge.com/n/2014-09-03-opinion-why-can-t-oer-enjoy-the-same-success-as-open-source-software "Whereas "free" largely means "freedom" in the hacker world, for Wiley and many of OER’s strongest advocates, it has come to mean primarily "no cost." When more than 60% of students report forgoing at least some of their textbook purchases because of cost, such a focus is understandable. And undeniably, to this point the "freedom" that’s so central to open software has yet to transfer into large numbers of faculty engaged with open content development. This, then, brings us to the central disagreement: different views regarding OER’s virtue as a means of lowering content costs, which I see as a necessary but insufficient condition for its mainstream use."
Acrobatiq . Blog . Jul 20, 2015 02:39pm
(The second post on the subject of online consortia can be found here.) :: We heard a couple of months back about Unizin launching a consortium of large state universities to share course content, content systems, and analytics. More recently we learnt about Cal State Online's decision to slow down its already tepid push into a system-wide collaboration to offer online courses. These initiatives join a long list of efforts to expand online education by sharing resources across institutions. The logic of building collaborations is infallible: joining forces can potentially bring down costs, reduce risk, provide access to better resources, stimulate innovation, ward off competition, and more. But there are more than a few failed efforts to ward off any naive assumptions that academic collaborations, particularly those that concern shared courses, are foolproof. There are more than 50 consortia in North America and almost as many types. There are right ways and wrong ways to go about it. Having had a chance to review online consortia recently for a client, I want now to share a few observations in a handful of posts, beginning with the "known obstacles" of course sharing initiatives. These obstacles are not insurmountable. But as with any undertaking, it helps to know where the potholes are before you set out on your journey.

one . . . Most online consortia that include course sharing agreements were created in the 1990s, a time when many institutions had yet to make substantial investments in online learning (e.g. technology, student support systems, professional development for faculty) and/or had yet to develop a sufficient level of confidence to carry out the functions required. As of 2014, the vast majority of universities have internal resources in place to support online education. Scaling back these investments is often difficult, owing to labour agreements and established practices. Nor do I suspect many institutions would choose to scale back at this point, given the growing strategic importance of online education to institutions. This, possibly more than any other factor, could dampen enthusiasm among institutions for participation in shared course delivery models.

two . . . Many course sharing initiatives target large-enrollment, foundational courses because, first, these courses appear to offer the greatest possible savings and, second, the curriculum is thought to be relatively generic. But of course these high-enrollment courses also often generate higher net revenue for universities than other courses and activities. Revenue from these courses is regularly redirected to other university activities and program areas that are less "profitable" (i.e. "cost-shifting"). Consequently, initiatives that facilitate students enrolling in foundational courses at other institutions may appear to administrators as a threat to a strong and stable source of revenue.

three . . . Owing to concerns about intellectual property, some university faculty and instructors are reluctant to distribute their instructional content outside of their home institution. While open educational resources are often presented as a solution, in practice this approach continues to face opposition, and seems to have greater traction for the distribution of academic research papers (e.g. open journals) than for instructional resources.

four . . . Course sharing initiatives typically involve moving students between institutions (rather than the courses moving between institutions).
In these cases, institutions may question whether courses offered by other institutions are inconsistent with, or inferior to, the instructional practices and academic standards of their own institution. Education is a positional good; it's used to define the status of the student (and of the faculty and institution). Not surprisingly, my review identified cases in which more prestigious institutions refused to participate in course sharing programs with less prestigious institutions. Initiatives that bring together institutions of similar status may produce higher rates of buy-in.

five . . . Institutions with more robust and successful online learning operations tend to be less interested in participating in consortia. As a result, consortia fail to benefit from the participation of the institutions with the greatest capacity for, and interest in, online education.

Next Up: Common Motivations and Objectives of Consortia Dr. Keith Hampson is Managing Director, Client Innovation at Acrobatiq, a Carnegie Mellon University venture born out of CMU's long history in cognitive science, human-computer interaction, and software engineering. @Acrobatiq
Acrobatiq . Blog . Jul 20, 2015 02:39pm
"Worth Reading" is a hand-picked weekly collection of new, not-so-new articles and downright old ideas, events and other items for higher education professionals. 1 UMUC: The Future of Learning An animated promotional video from UMUC outlines what its vision of the future of online higher education. The vision includes, notably, competency-based assessment use of mentors as the primary contact for the student direct assessment prior learning assessment student portfolios open educational resources mixed with university licensed materials; extensive predictive analytics 2 So Bill Gates Has This Idea for a History Class … New York Times Magazine cover story (no less) on the "Big History Project". "As Gates was working his way through the series, he stumbled upon a set of DVDs titled "Big History" — an unusual college course taught by a jovial, gesticulating professor from Australia named David Christian. Unlike the previous DVDs, "Big History" did not confine itself to any particular topic, or even to a single academic discipline. Instead, it put forward a synthesis of history, biology, chemistry, astronomy and other disparate fields, which Christian wove together into nothing less than a unifying narrative of life on earth." 3 Daniels awards prize for competency-based degree to Purdue Polytechnic Institute Not insignificant news. A major university, Purdue, is interesting a competency-based program. "The national interest in competency-based education, also called direct assessment, comes on the heels of U.S Department of Education guidelines released last year for institutions wanting to provide federal student aid to enrollees in such programs. In July, the U.S. House of Representatives also passed legislation that further enables institutions offering competency-based degrees to participate in federal student aid programs." 4 The Real Value of Online Education: Why low completion rates may not matter Article in The Atlantic that encourages us to look past the low completion rates of MOOCs, and to focus on the significant volume of learning taking place. " . . . focusing on the tiny fraction of students who complete a MOOC is misguided. The more important number is the 60 percent engagement rate. Students may not finish a MOOC with a certificate of accomplishment, but the courses nonetheless meet the educational goals of millions." 5 9 MOOCs in Norwegian higher education A Norwegian government policy document that outlines the use of MOOCs for credit. Thin edge of the wedge. (Thanks to Kris Olds for pointing this out.) 6 Grade expectations: An "A" is not what it used to be Economist article that points to research detailing rising grades at elite US institution. "In 1950, Mr Rojstaczer estimates, Harvard’s average grade was a C-plus. An article from 2013 in the Harvard Crimson, a student newspaper, revealed that the median grade had soared to A-minus: the most commonly awarded grade is an A. The students may be much cleverer than before: the Ivies are no longer gentlemen’s clubs for rich knuckleheads. But most probably, their marks mean less."
Acrobatiq . Blog . Jul 20, 2015 02:39pm