Do institutions need to obtain consent from students before collecting and using data about their online learning activities?
Should learners be allowed to opt out of having data collected about them?
Could showing students predictions about their likelihood of academic success have a negative effect on their motivation and make them more likely to drop out?
These are some of the questions addressed in a new report released today by Jisc: Code of practice for learning analytics: A literature review of the ethical and legal issues.
The aim of the review is to provide the groundwork for a code of practice which is intended to help institutions solve the complex issues raised by learning analytics.
It was a very interesting task gathering together publications from many different authors and organisations. I drew material from 86 publications, more than a third of them published within the last year from sources including:
The literature around learning analytics which makes explicit reference to legal and ethical issues
Articles and blogs around the ethical and legal issues of big data
A few papers which concentrate specifically on privacy
Relevant legislation, in particular the European Data Protection Directive (95/46/EC) and the UK Data Protection Act 1998
Related codes of practice from education and industry
Expressing issues as questions can be a useful way of making some of the complexities more concrete. I’ve incorporated 93 questions from the literature that authors had posed directly. The categorisations of these highlighted in the word cloud below give an instant flavour of the main concerns around the implementation of learning analytics being raised by researchers and practitioners.
At the end of the report I reviewed 16 codes of practice or lists of ethical principles from related fields and found the main concepts people wish to embody are transparency, clarity, respect, user control, consent, access and accountability.
I’ve attempted to be comprehensive but I challenge you to spot any relevant legal or ethical issues I’ve missed in the report - please let us know in the comments below if you find any. The next task is to put together an expert group to advise on the development of the code of practice … and to begin drafting the document itself.
Niall Sclater | Blog | Aug 19, 2015
Retweeting matters - hugely. A scan of my tweet stream shows that most tweets have already been retweeted or are themselves retweets of other tweets. It’s the same situation with Facebook. Social media is an endless recycling of other people’s thoughts or creations.
The simple fact is that most of us don’t have the time to come up with original content but if we see something interesting we want to share it with others. Our motivations for sharing are complex. We might want to enhance our professional credibility by sharing an important development in our field, and be the first to do so. This can attract followers, itself boosting work opportunities or our fragile egos if we measure our value by the number of followers, favourites and retweets we get.
Retweeting may also simply be driven by the desire for others to share our enjoyment of an amazing video, amusing cartoon or fascinating article.
RT
It didn’t take long for Twitter users to start prefixing the acronym "RT" when resending other people’s tweets. In the academic world this is important - implying that you wrote the tweet yourself is a form of plagiarism, and doesn’t give credit to its originator.
The problem with using "RT" was that your tweet stream would then sometimes be filled with the same tweet retweeted by multiple people. So Twitter invented the retweet function which meant that the tweet would only appear once in your feed, no matter how many people subsequently retweeted it this way. But it also meant that you had no recognition for retweeting any more. Twitter then added a further function to show how many retweets your retweet had received, which solved the problem of lack of recognition while crediting the original author of the tweet for their work.
Modifying and de-identifying
The "MT" or modified tweet is one way people are attempting to make the tweet their own but still give credit to its originator. It’s also a good way of ensuring that you subsequently get the retweets rather than the originator.
Not everyone is driven by an ethical desire to avoid plagiarism. There are people who appear to be making a career out of scanning multiple feeds for interesting content and then repackaging it into their own tweets as if they were the first to discover it - de-identifying the original author. They then get the credit for being the expert, and the ego boost from retweets of their tweet, new favourites and followers.
Twitter analytics
Being involved in a project at the moment where disseminating information is important, I’ve been thinking a lot about these issues. Retweeting matters because it’s a sign that what we’re doing is of interest to people. It also has a monetary value attached to it in the commercial arena. There are packages such as Twitter analytics which show the reach of your tweets - though they miss all the RTs, MTs and de-identified retweets. But some analytics systems are getting more sophisticated and gather similar tweets, which may show how your tweet was repackaged by others and give you a more realistic view of its impact.
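Gathering "similar tweets" of this kind boils down to fuzzy text matching. A minimal sketch in Python of how such a check might work, assuming a simple similarity threshold - the function names and the threshold value are illustrative, not taken from any real analytics package:

```python
import re
from difflib import SequenceMatcher

def similarity(original: str, candidate: str) -> float:
    """Return a 0-1 similarity score between two tweets, ignoring any
    'RT @user:' or 'MT @user:' prefix, letter case and extra whitespace."""
    def normalise(text: str) -> str:
        text = re.sub(r'^(RT|MT)\s+@\w+:?\s*', '', text, flags=re.I)
        return ' '.join(text.lower().split())
    return SequenceMatcher(None, normalise(original), normalise(candidate)).ratio()

def is_repackaged(original: str, candidate: str, threshold: float = 0.8) -> bool:
    """Heuristic: treat the candidate as a repackaged copy of the original
    if the normalised texts are sufficiently similar."""
    return similarity(original, candidate) >= threshold
```

A real system would also need to compare timestamps and embedded links, but even this crude check catches manual "RT"-style copies that Twitter's own retweet counts miss.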
Getting lost in a sea of tweets
As many Twitter users follow hundreds or thousands of other accounts, it’s more than likely that when you tweet something it’ll be lost in a sea of other tweets and they’ll never see it. When others retweet your comment it doesn’t necessarily help, because if your followers are also following them they won’t see the retweet.
Some authors tweet about the same thing in a different way at different times of the day to attempt to put their work in front of you if you missed it the first time. But I can’t help feeling that that just tends to get annoying if you did see it already and I don’t personally want to be tweeting something more than once, particularly as a friend told me once, flatteringly, that his mobile phone beeps every time I send a tweet.
So RTs, MTs and simple plagiarised, uncredited retweets of your original comment - if it includes a link to your content somewhere else - end up being great, as more people are likely to view your content. If those people themselves get retweets, reputation enhancement, new followers, ego boosts or the simple joy of sharing along the way, then good luck to them.
Niall Sclater | Blog | Aug 19, 2015
"Facebook is mainly for older people who want to share baby photos."
If you spend any time at all with teenagers it becomes clear pretty quickly that their use of social media is a bit of a mystery to us older folk. Their devices are constantly emitting strange noises from unknown apps that demand their instant attention.
I took the opportunity while having a bracing walk through hail showers in the Highlands yesterday with a fifteen year-old to ask her about what applications she finds most useful. The uses of social media by one Scottish schoolgirl are not necessarily representative of all teenagers but they’re worth noting just because they’re so different from my own. The situation is evolving quickly and those of us who work with social media need to understand how the next generation of soon-to-be adults is using the technologies.
Facebook
She hardly ever updates her status any more: "Most of what people post is just boring. It’s mainly for older people who want to share baby photos." She’s also acutely conscious of her large number of "friends" from different backgrounds, including family, and how impossible it is to post something of relevance to all the different audiences. She and her peers have systematically gone back through their Facebook timelines and purged all the status updates which are now irrelevant or embarrassing.
Facebook still has two main uses for her though:
As a landing place for her presence on the Internet
As an instant messaging system: it’s a great way to have synchronous or longer-lasting chats with friends on a one to one basis. Sometimes the chats have several participants - and one conversation she keeps going with two friends permanently
If these are the most useful facilities of Facebook for its younger users then the platform’s future is on shaky ground. Both of these functions are already replicated by countless other systems, and could very quickly migrate away from Facebook if fashion dictates it. Meanwhile the myriad third-party apps presented on the platform are, it seems, largely irrelevant - at least to this teenager.
I discussed the Facebook mood experiment with her to see what she thought. She’d never heard of it and, despite being bright and having a strong moral sense, saw nothing wrong with it whatsoever. This was the only really surprising thing about our conversation. To my mind, and I would assume most of my generation, messing about with people’s emotions in that way, unbeknown to them, is insidious and immoral. But this digital native, if there is such a thing, simply didn’t see the problem with it.
Snapchat
She’s frequently taking selfies or snapping photos of interesting things and sharing them with individual friends or a few at once - and these are generally real friends, not just contacts. Sometimes random people get in touch with her to ask if she wants to be friends on Snapchat. "One girl contacted me last week. I said ‘Do I know you?’ She said ‘No.’ So I said ‘Well why would I want to talk to you then?’ She didn’t respond." I pointed out that it might not actually have been a girl but, somewhat worryingly, she didn’t seem to have thought of that.
She and her friends also use Snapchat as a messaging system and frequently send "black photos" which contain a little text but no image. Not only does this convey a brief message, but it beefs up the number of pictures you’ve sent - which seems to have some kind of gaming value.
Instagram
She uses this app mainly for following companies, such as clothing manufacturers, whose products she finds interesting. She doesn’t use it that much to post pictures herself. But she does have both a private and public profile, and posts a few pictures publicly, which have quite a few "likes". Some people she says remove pictures which don’t get many likes from their profiles in order to make themselves look more popular.
Twitter
"Twitter’s just boring" and she’s largely stopped using it. Pity: it turns out that my main social media platform is for boring people.
Traditional technologies: SMS and email
While not perhaps strictly a social media technology because it’s not Internet-based, she finds that text messaging remains a highly useful way of communicating with friends. Now that many mobile phone packages come with unlimited texts, and you don’t need to be connected to wifi to use them, you can see why this old, simple technology remains so useful.
Meanwhile, though we didn’t discuss it, I’ve noticed that she’s quite comfortable using email where appropriate. She scanned in a document and emailed it to me last night, for instance. But perhaps she was just humouring me and uses email for communicating with greybeards and wrinklies. If I’d had a presence in some of the more hip places on the web, she’d have no doubt sent it to me using a cooler technology…
Niall Sclater | Blog | Aug 19, 2015
Many applications for learning analytics have been proposed, are under development or are already being deployed by institutions. These range from prompting staff to intervene with students at risk of drop-out to attempting to understand whether some learning content or activity is effective.
Much of this is about providing better information to staff about students and their learning. But if learning analytics is primarily about enhancing students’ learning shouldn’t we be putting analytics in the hands of the learners themselves? This was the conclusion that participants at the co-design workshop back in the summer came to.
Jisc has recently begun the procurement process to commission a range of basic learning analytics services for UK further and higher education. One of these services is the provision of a student app, taking its data primarily from a learning analytics warehouse which in turn is likely to source data from VLEs (LMSs), student information systems and elsewhere.
Jisc is hosting an event in February where we’ll bring together people from universities and colleges across the UK to look at what they think can and should be provided to students directly. The requirements gathering process will find out from students directly what analytics services they would be most interested in having at their fingertips on a smartphone or tablet. Here are some initial thoughts about what might crop up:
Measuring engagement
Students might find visualisations of their participation in a course useful, measured through a variety of metrics such as VLE access, campus attendance and library use. Comparisons with peers may be helpful. And comparisons with previous cohorts, showing the profile of a successful student, might be useful too. These could be presented in a variety of ways, including graphs of engagement over time compared with others. Learners might want to have alerts sent to their device through the app if their participation shows they’re falling below an acceptable level.
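An engagement metric of the kind described above could be as simple as a weighted combination of the individual measures, with an alert when a student falls well below the cohort. The weights, caps and threshold in this sketch are purely illustrative assumptions, not part of any actual Jisc service:

```python
def engagement_score(vle_logins: int, attendance_rate: float,
                     library_visits: int) -> float:
    """Combine several weekly activity metrics into a single 0-100 score.
    Weights and caps are invented for illustration."""
    score = (min(vle_logins, 20) / 20 * 40        # VLE logins, capped at 20/week
             + attendance_rate * 40               # attendance as a fraction 0-1
             + min(library_visits, 5) / 5 * 20)   # library visits, capped at 5/week
    return round(score, 1)

def needs_alert(score: float, cohort_average: float, margin: float = 15.0) -> bool:
    """Trigger an in-app alert when a student falls well below the cohort."""
    return score < cohort_average - margin
```

Even a toy model like this makes the design questions concrete: who chooses the weights, what counts as an "acceptable level", and whether the comparison point is the cohort average or a fixed threshold.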
Measuring assessment performance
There is clearly a need to show details of assessments already completed and grades obtained, and the dates, locations and requirements of impending ones. Assessment events transferred to your calendar with advance alerts could also be useful. But arguably this is simple reporting and alerting functionality and not learning analytics. A progress bar showing how you are progressing through your module and qualification might be helpful. Otherwise assessment data could feed into one of the metrics used for measuring engagement.
Module choice
One application of learning analytics is to assist students in making module choices. Analytics can recommend modules where you are most likely to succeed, comparing your profile with those of previous students and presenting you with information such as "Students with similar profiles to you have tended to perform better when selecting xxx as their next module".
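Comparing a student's profile with those of previous students is essentially a nearest-neighbour calculation. A hypothetical sketch, with invented profile features and data, of how such a recommendation might be ranked:

```python
from math import dist

def recommend(profile, history, modules, k=3):
    """Rank candidate modules by the mean grade achieved by the k previous
    students whose profiles were most similar to this student's.

    profile: tuple of numeric features, e.g. (average_grade, engagement)
    history: list of (profile, module, grade) records for past students
    modules: candidate modules the student could choose next
    """
    ranking = []
    for module in modules:
        takers = [(dist(profile, p), grade)
                  for p, m, grade in history if m == module]
        takers.sort(key=lambda t: t[0])            # nearest profiles first
        nearest = takers[:k]
        if nearest:
            mean_grade = sum(g for _, g in nearest) / len(nearest)
            ranking.append((module, round(mean_grade, 1)))
    return sorted(ranking, key=lambda t: -t[1])
```

Note how much hangs on the choice of profile features and of k: a different feature set could reverse the ranking, which is exactly why the sophistication of the algorithm matters for the ethical concerns discussed below.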
Issues
The above proposed functionality comes with ethical questions, such as: Could an app showing you’re falling behind and likely to fail a module be de-motivational and act as a self-fulfilling prophecy? And the module choice example is of course highly dependent on the sophistication of the algorithm, and potentially restricts free choice. I’ve discussed these and many other ethical issues in a recently-published literature review which is the precursor to a Code of Practice for learning analytics which Jisc is co-developing with the sector.
Another issue is whether it makes sense from the student’s point of view to separate an analytics app from other student-facing functionality. Apps containing details of transport, campus maps, your next lecture, computer availability in the library and much else that is digitised on campus are already available to students in many institutions. Having a separate analytics app might be inconvenient. On the other hand mobile apps tend to have a limited amount of functionality compared with traditional full-scale PC applications. An app for monitoring your learning might make sense in its own right.
A student-facing app may make learning analytics more tangible and show people the possibilities of using all that data being accumulated to benefit students directly. I’ve only scratched the surface of what’s possible in the suggestions above. The event planned for February has already had a large amount of interest from the sector and we’re looking forward to gathering innovative suggestions from staff and students across the UK to be built into the Jisc app. Stay tuned to this blog for updates on progress.
Niall Sclater | Blog | Aug 19, 2015
A wide-ranging discussion took place in London last week to discuss the emerging Code of Practice for Learning Analytics. A new advisory group for the Code includes representatives from the National Union of Students, Edinburgh, Huddersfield, Lancaster and Loughborough Universities, Bucks New University, The Open University, Croydon College and Jisc.
A Code of Practice for Learning Analytics has been identified by Jisc stakeholders in higher and further education as a development priority. The literature review is the first stage in this activity and provides the underlying rationale. The next stage is developing the document itself.
The Code will aim to help remove barriers to the adoption of learning analytics. Its emphasis should be positive, realistic and facilitative. It’ll provide a focus for institutions to deal with the many legal and ethical hurdles which are arising, and can be presented as an evolving, dynamic site rather than a lengthy one-off document which hardly anyone reads, let alone adheres to.
Jisc will coordinate the development and roll-out of the Code. Meanwhile advisory group members agreed to critique the Code as it’s being developed and consider piloting it at their own institutions.
Methodology and approaches
Some documents take a particular methodological or philosophical stance. For instance Slade and Prinsloo’s socio-critical approach - where learning analytics is viewed as a "transparent moral practice" and students are seen as co-contributors - has influenced the Open University’s Policy on Ethical Use of Student Data. Should the Code take such an approach?
One of the main challenges will be to strike a balance between a paternalistic approach and respecting students’ privacy and autonomy. It was suggested that the various uses for student data might have different approaches to consent:
Helping individual students based on their own data
Merging individuals’ data with those of others to help the group
Using data to help future cohorts of students
Informed consent could potentially be obtained for each of these options.
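Recording consent separately for each of these three uses could be modelled with something as simple as a per-student record. The field names below are illustrative only, not drawn from the Code:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Consent flags for the three uses of student data listed above.
    All default to False, i.e. no consent until explicitly given."""
    student_id: str
    individual_support: bool = False   # helping the student from their own data
    group_analytics: bool = False      # merging with others' data to help the group
    future_cohorts: bool = False       # using data to help future cohorts

    def permits(self, use: str) -> bool:
        """Check whether a named use has been consented to;
        unknown uses are refused by default."""
        return getattr(self, use, False)
```

The "refuse by default" behaviour for unknown uses reflects the change-of-purpose concern raised in the literature review: a use not explicitly consented to should not be silently permitted.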
There was also concern that any sharing of student data outside the institution should be carefully controlled. The Code itself should have boundaries and may need to reference other institutional policies. There should be differentiation between demographic and behavioural data, and the "right to be forgotten" needs to be addressed.
A separate document for students?
An approach which puts the needs of learners at the heart of the Code is surely likely to result in a better and more widely-adopted document which helps to allay the fears of students and institutions and facilitate the uptake of learning analytics. The inclusion of the NUS in this group is therefore particularly welcome.
There will need to be a balance and series of compromises struck however to develop a usable Code and encourage mutual understanding. The group decided a single document setting out clearly the rights and responsibilities of students, institutions and staff would be preferable to having a separate student "bill of rights for learning analytics".
Explaining what the Code means in practice however may require separate advice for different stakeholders. At institutions the Code should link closely with the student charter, and involve buy-in from the students’ union.
Striking a balance between high level principles and detailed guidance
Can the Code be sufficiently high level to meet the needs of all institutions while remaining specific enough to provide genuinely helpful guidance? It was very clear from my institutional visits that the potential uses of learning analytics and the concerns raised varied widely across institutions. The group thought that the document should be fairly high level in order to prove useful to all, but should be backed up by case studies and examples of how institutions have dealt with particular issues. The case studies could be released alongside the code - for each principle there could be examples of good practice.
Conformance with the Code
Another question I posed to the group was whether we should encourage institutions to adopt the Code wholesale, and therefore be able to claim conformance with it, or to customise it to their own requirements. We probably need to see the end result first, but it was felt that institutions might want to be able to adopt the Code with local modifications.
Human intermediation
Particular concern was expressed that the Code needs to reflect the human context and the need for intermediation of learning analytics by staff. This is a common ethical theme in the literature. However a representative from the Open University said that the sheer scale of that institution makes it unfeasible to use human intermediation for many of the potential uses of learning analytics. Meanwhile there was real concern among members that the language which is used to present analytics to students should be carefully considered and that data is only exposed when institutions have mechanisms in place to deal with the effect on students. The potential impact of analytics on the educator also needs to be reflected in the Code.
Format
All of the related codes of practice I’ve looked at are textual documents - normally provided in PDF. The members felt that a document outlining the principles needed to be provided in order to present it to institutional committees but that an interactive website containing case studies, perhaps in the form of videoed interviews with staff and students, would be welcome.
Some codes are extremely lengthy and somewhat uninspiring papers stretching to thirty pages or more. One of the better formats I’ve seen is the Respect Code of Practice for Socio-Economic Research. It’s concise - only four pages - and reasonably visually appealing, therefore arguably more likely to be read and absorbed by busy people than some of the longer codes. However, given the large number of issues identified in our literature review, four pages is unlikely to be sufficient.
One approach would be to back up a concise summary document with more detailed online guidance for each of the areas. Discussion forums could be included on each topic, enabling users to raise further issues which arise, and others to provide advice on how they’ve tackled that challenge. This would need some ongoing promotion, facilitation and moderation by Jisc and/or members of the community.
Areas to be included
The literature review covers most of the ethical and legal issues which are likely to be of concern to students and to institutions when deploying learning analytics, though there may be some which I’ve missed or have not yet cropped up in the literature. The section headings and the word clouds in the review could help prioritise the main areas to be included in the Code. It was pointed out that it would be difficult to deal with all of these meaningfully within four pages but certainly each area could be expanded on in the supporting documentation.
Including vendors
One member suggested including vendors in the consultation process for the Code. It might help them when making development decisions, for instance encouraging them to build consent systems into their products. The Code could help to ensure that safeguards, such as ensuring privacy, are built in without holding back innovation.
Development process
Jisc will develop the Code up until May 2015 with guidance from the advisory group. Supporting content, e.g. videoed interviews, can be developed subsequently to help raise awareness of the Code, provide examples of how it’s being implemented and keep it current.
A sense of ownership by institutions and by students is essential to ensure adoption. How can this best be achieved? A range of stakeholder organisations was proposed and a number of possible events to piggy-back on were suggested. Several members said they’d be keen to try piloting the Code at their institutions too. An experiential learning cycle was suggested, with institutions thinking about:
What’s the ethical/legal issue?
What’s the principle to deal with it?
How did we apply the principle?
Roll-out and dissemination
There is already considerable awareness of the intended Code of Practice but how should it best be disseminated once developed? One member suggested it would be useful to understand better the processes inside institutions for getting academic policies adopted as this will be key to uptake. In addition, a couple of events specifically around the Code could be held, papers delivered at relevant conferences and approaches made to newspapers to see if they’d like to cover its launch. It was felt that the Code should be launched with some fanfare at a larger event to increase awareness and potential take-up.
Now on with developing it… Comments are welcome.
Niall Sclater | Blog | Aug 19, 2015
In discussions around the ethics and legal issues of learning analytics I’ve found the same issues cropping up again and again. Almost always they’ve already been covered somewhere in the growing collection of publications on learning analytics. Sometimes they’re expressed in different ways but boil down to the same underlying problem.
The literature review of these issues aims to provide the background for the development of a code of practice for learning analytics. But it’s a large and unwieldy document to refer to so I’ve attempted to distil and group the many issues that I’ve come across so far.
I’ve given each of the resulting 86 issues a name and have provided a question which attempts to capture the issue. Many of these cannot be answered simply; almost all could be responded to with "It depends…" Most have both an ethical and a legal dimension. Some are related more to logistics than ethics or law. And some are already dealt with by existing institutional policies.
I’ll be taking this taxonomy to a workshop next week in Paris with the Lace Project and Apereo where I hope the issues and questions can be clarified and further refined. Then it’s over to the Code of Practice Advisory Group in the UK for their advice on how to translate this into a useful Code of Practice.
Area
Issue
Question
Validity
Minimisation of inaccurate data
How does an institution minimise inaccuracies in the data?
Minimisation of incomplete data
How does an institution minimise incompleteness of the dataset?
Optimum range of data sources
How many and which data sources are necessary for increasing accuracy in the analytics?
Verification of algorithms and metrics
How should an institution verify its algorithms and metrics to ensure accuracy?
Spurious correlations
How can institutions avoid drawing misleading conclusions from spurious correlations?
Evolving nature of students
To what extent can analytics be accurate when students’ identities and actions evolve as they progress through their studies?
Authentication of public data sources
How can institutions ensure that student data taken from public sites is authenticated to their students?
Ownership and control
Control of data for analytics
Who in the institution decides what data is collected and used for analytics?
Breaking silos
How can silos of data ownership in institutions be broken into in order to obtain data for analytics?
Control of analytics processes
Who in the institution decides how analytics are to be created and used?
Overall responsibility
Who is responsible in the institution for the appropriate and effective use of learning analytics?
Ownership of data
What data does the institution own and what is owned by the student?
Awareness
Student awareness of data collection
What should students be told about the data that is being collected about them?
Student awareness of data use
What should students be told about the uses to which their data is being put?
Student awareness of algorithms and metrics
To what extent should students be given details of the algorithms used for learning analytics and the metrics and labels that are created?
Proprietary algorithms and metrics
What should institutions do if vendors decline to make details of their algorithms and metrics public?
Student awareness of potential consequences of opting out
What should students be told about the potential consequences of opting out of data collection and analysis of their learning?
Staff awareness of data collection and use
What should staff be told about the data that is being collected about their students and what is being done with it?
Consent and opting out
When to seek consent
In what situations should students be asked for consent to collection and use of their data for learning analytics?
Consent for anonymous use
Should students be asked for consent for collection of data which will only be used in anonymised formats?
Consent for outsourcing
Do students need to give specific consent if the collection and analysis of data is to be outsourced to third parties?
Clear and meaningful consent processes
How can institutions avoid opaque privacy policies and ensure that students genuinely understand the consent they are asked to give?
Right to opt out
Does a student have the right to opt out of data collection and analysis of their learning activities?
Partial consent
Can students consent to some data collection and analysis but opt out elsewhere?
Right to withdraw
Does a student have the right to withdraw from data collection and analysis after previously having given their consent?
Right to anonymity
Should students be allowed to provide pseudonyms to disguise their identity in certain circumstances?
Adverse impact of opting out on individual
If a student is allowed to opt out of data collection and analysis of their activities could this have a negative effect on their studies?
Adverse impact of opting out on group
If individual students opt out will the dataset be incomplete, thus potentially reducing the accuracy and effectiveness of learning analytics for the group?
Lack of real choice to opt out
Do students really have a choice if pressure is put on them by the institution or there’s a chance of adverse impact on their academic success by opting out?
Student input to analytics process
Should students have a say in what data is collected and how it is used for analytics?
Change of purpose
Should institutions request consent again if the data is to be used for purposes for which consent was not originally given?
Legitimate interest
To what extent can the institution’s "legitimate interests" override privacy controls for individuals?
Unknown future uses of data
How can consent be requested when potential uses of the (big) data are not yet known?
Consent in open courses
Are open courses (MOOCs etc) different when it comes to obtaining consent?
Use of publicly available data
Can institutions use publicly available data (e.g. tweets) without obtaining consent?
Student access
Student access to their data
To what extent should students be able to access the data held about them?
Student access to their analytics
To what extent should students be able to access the analytics performed on their data?
Data formats
In what formats should students be able to access their data?
Metrics and labels
Should students see the metrics and labels attached to them?
Right to correct inaccurate data
What data should individuals be allowed to correct about themselves?
Data portability
What data about themselves can the learner take with them?
Privacy
Out of scope data
Is there any data that should not be used for learning analytics?
Access to employers
Under what circumstances would it be appropriate to give employers access to analytics on students?
Tracking location
Under what circumstances is it appropriate to track the location of students at campuses?
Staff permissions
To what extent should access to individuals’ data be restricted within an institution?
Unintentional creation of sensitive data
How do institutions avoid creating "sensitive" data (e.g. ethnicity, religion) from other data sources?
Use of metadata to identify individuals
Can individuals be identified from metadata even if personal data has been deleted?
Requests from external agencies
What should institutions do when requests for student data are made by external agencies e.g. educational authorities or security agencies?
Sharing data with other institutions
Is it appropriate for institutions to share student data with other institutions in order to increase the dataset and enhance the analytics?
Enhancing trust by retaining data internally
If students are told that their data will be kept within the institution will they develop greater trust in and acceptance of learning analytics?
Action
Institutional obligation to act
What obligation does the institution have to intervene when there is evidence that a student could benefit from additional support?
Student obligation to act
What is the student’s obligation to act on learning analytics designed to help them?
Conflict with study goals
What should a student do if the suggested advice is in conflict with their study goals?
Obligation to prevent continuation
Is there an obligation on the institution to prevent students from continuing on a pathway if analytics show that it is not in the student’s or institution’s interests for them to continue?
Type of intervention
How are the appropriate interventions decided on?
Distribution of interventions
How should interventions resulting from analytics be distributed among different stakeholders in the institution?
Conflicting purposes of interventions
How does the institution ensure that it is not carrying out multiple interventions whose purposes conflict?
Staff incentives for intervention
What incentives are in place for staff to intervene?
Failure to act
What happens if an institution fails to intervene?
Need for human intermediation
Are some analytics better presented to students via e.g. a tutor than via a system?
Triage
How does an institution allocate resources for learning analytics appropriately for learners with different requirements?
Triage transparency
How transparent should an institution be in how it allocates resources to different groups?
Opportunity cost
How is spending on learning analytics justified in relation to other funding requirements?
Favouring one group over another
Could the intervention strategies favour one group of students over another?
Consequences of false information
What should institutions do if it is determined that a student has given false information to e.g. obtain additional support?
Audit trails
Should institutions record audit trails of all predictions and interventions?
Unexpected findings
What infrastructure is in place to deal with something unexpected arising in the data?
Adverse impact
Labelling bias
Does a student profile or labelling bias institutional perceptions and behaviours towards them?
Oversimplification
How can institutions avoid overly simplistic metrics and decision making which ignore personal circumstances?
Undermining of autonomy
Is student autonomy in decision making about their learning undermined by predictive analytics?
Gaming the system
If students know that data is being collected about them will they alter their behaviour to present themselves more positively, thus skewing the analytics and distracting them from their learning?
Abusing the system
If students understand the algorithms behind learning analytics will they abuse the system to obtain additional support?
Adverse behavioural impact
If students are presented with data about their performance, likelihood of failure etc. could this have a negative impact on their behaviour, leading to increased likelihood of failure and dropout?
Reinforcement of discriminatory attitudes and actions
Could analytics reinforce discriminatory attitudes and actions by profiling students based on e.g. their race or gender?
Reinforcement of social power differentials
Could analytics reinforce social power differentials and learners’ status relating to each other?
Infantilisation
Could analytics "infantalise" students by spoon-feeding them with automated suggestions, making the learning process less demanding?
Echo chambers
Could analytics create "echo chambers" where intelligent software reinforces our own attitudes or beliefs?
Non-participation
Will knowledge that they are being monitored lead to non-participation by students?
Stewardship
Data minimisation
Is all the data being held on an individual necessary in order to carry out the analytics?
Data processing location
Is the data being processed in a country permitted by the local data protection laws?
Right to be forgotten
Can all data regarding an individual except that necessary for statutory purposes be deleted?
Unnecessary data retention
How long should data be retained for?
Unhelpful data deletion
If data is deleted does this restrict the institution’s ability to refine its models, track performance over multiple cohorts etc?
Incomplete knowledge of data sources
Can an institution be sure that it knows where all personal data is held?
Inappropriate data sharing
How can we prevent data being shared within or outside the institution with parties who have no legitimate interest in seeing it or may use it inappropriately?
Risk of re-identification
Does anonymisation of data become more difficult as multiple data sources are aggregated, potentially leading to re-identification of an individual later?
Niall Sclater | Blog | Aug 19, 2015 01:06am
European experts came together last week in an icy Paris to review Jisc’s evolving architecture for learning analytics. The event at L’Université Paris Descartes was jointly hosted by Apereo and Jisc. Delegates included representatives from the Universiteit van Amsterdam, the Pädagogische Hochschule Weingarten in Germany, Surfnet in the Netherlands, CETIS and the Lace Project, as well as Jisc and Apereo.
The architecture walkthrough involved participants taking on the following roles for the day:
Oracle (knows everything about the architecture)
Student
Teacher
Tutor
Researcher
Security expert
Software architect
Privacy-enhancing technologies expert
Front end developer
Federated log-on expert
Enterprise service bus expert
Data governance expert
Writer
Chair
It turned out to be a very effective way of getting some in-depth constructive criticism of the model, which Jisc is using to procure the components of a basic learning analytics system.
Michael Webb takes us through the Jisc learning analytics architecture
Consent service
The workshop was organised in conjunction with the Apereo Europe conference and an event the following day around ethics and legal issues. It was interesting how almost immediately the architecture session got caught up in issues relating to privacy. These were of such concern to the Germans present that they believed their students wouldn’t be prepared to use a learning analytics system unless the data was gathered anonymously. Once learners had gained confidence with the system they might be persuaded to opt in to receive better feedback. Thus the consent service was confirmed by the group as a critical part of the architecture.
Two use cases were suggested for the consent service: 1. students control which processes use their data, and 2. the institution wants to use data for a new purpose so needs to obtain consent from the students. I find myself wondering about the logistics here: what happens if the student has left the institution? Will you risk having large gaps in the data which diminish the value of the overall dataset?
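The two use cases above can be sketched as a minimal consent store. This is an illustration only, not Jisc's design; all class, purpose and identifier names are hypothetical. The key behaviours are that the latest decision per student and purpose wins (so consent can be withdrawn), and that the absence of a record is treated as no consent:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One student's consent decision for one analytics purpose."""
    student_id: str
    purpose: str              # e.g. "predictive-modelling" (hypothetical purpose name)
    granted: bool
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ConsentService:
    """Keeps only the latest decision per (student, purpose) pair."""
    def __init__(self):
        self._records = {}

    def record(self, student_id, purpose, granted):
        # A later record silently supersedes the earlier one.
        self._records[(student_id, purpose)] = ConsentRecord(student_id, purpose, granted)

    def is_permitted(self, student_id, purpose):
        rec = self._records.get((student_id, purpose))
        return rec is not None and rec.granted  # no record means no consent

svc = ConsentService()
svc.record("s123", "predictive-modelling", True)
svc.record("s123", "predictive-modelling", False)  # later withdrawal overrides
```

Note that this sketch says nothing about the harder logistical questions raised above, such as what happens to already-collected data when a student leaves or withdraws consent.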
One participant suggested that students could decide if they wanted analytics to be temporarily switched off - like opening an incognito window in a browser. This would allow them to have a play and do some exploration without anything being recorded. The logistics of building this into multiple systems though would certainly also be complex - and it would potentially invalidate any educational research that was being undertaken with the data.
Students may be worried about privacy, but handling the concerns of teachers was also felt to be crucial. It was suggested that statistics relating to a class should remain private to the teacher of that class; concern was expressed that learning analytics could be used to identify and subsequently fire ineffective teachers. "Could a predictive model allow unintelligent people to make decisions?" was the way the participant with the "teacher" role summed up the perennial battle for control between faculty and central administrators.
One suggestion to minimise privacy infringements was to use the LinkedIn model of notifying you when someone has looked at your profile. Certainly every time someone views a student’s data it could be logged and be subsequently auditable by the student.
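The logging idea above could be as simple as an append-only access log that the student can query back. A minimal sketch (names hypothetical; in practice this would sit behind the institution's authentication and storage layers):

```python
from datetime import datetime, timezone

class DataAccessLog:
    """Append-only record of who viewed which student's data,
    auditable afterwards by the student concerned."""
    def __init__(self):
        self._entries = []

    def log_view(self, viewer_id, student_id, resource):
        self._entries.append({
            "viewer": viewer_id,
            "student": student_id,
            "resource": resource,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def views_of(self, student_id):
        """What a student would see when auditing access to their own data."""
        return [e for e in self._entries if e["student"] == student_id]

log = DataAccessLog()
log.log_view("tutor42", "s123", "engagement-dashboard")
log.log_view("admin7", "s456", "attendance-record")
```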
Jisc learning analytics architecture v2.0
Student app
One idea was for the student app to use an open API, allowing other student-facing services to be integrated with it. Another issue raised was that most analytics is carried out on data sources which can be fairly "old"; however, there may be a need for real-time learning analytics. And a student app which assessed whether learning outcomes had been achieved could also be very useful.
One of the most interesting ideas mooted was this: could the most important source for predictive analytics be "self-declared" data? It might be that some wearable technology monitoring your sleep patterns or your exercise levels for example could be mapped onto your learning performance. Or you might want to log the fact that you’d watched several relevant youTube videos that you’d discovered.
Learning record store
Concern was expressed around the performance of dashboards when required to process big data. Thus the ETL (extract, transform and load) layer is crucial in determining what data is stored in the learning records warehouse.
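A toy "transform" step illustrates the idea: keep only the fields the warehouse actually needs and drop everything else at ingest, which both trims storage and supports data minimisation. Field names here are hypothetical:

```python
# Fields the (hypothetical) learning records warehouse keeps.
KEEP = {"student_id", "activity_id", "verb", "timestamp"}

def transform(raw_event: dict) -> dict:
    """Strip an incoming event down to the whitelisted fields."""
    return {k: v for k, v in raw_event.items() if k in KEEP}

slim = transform({
    "student_id": "s123",
    "activity_id": "quiz-3",
    "verb": "completed",
    "timestamp": "2015-02-12T10:00:00Z",
    "ip_address": "192.0.2.1",   # dropped: not needed for the analytics
})
```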
Alert and intervention system
This should not only be in place to help those at risk but should also allow the teacher to analyse how well things are going on overall in the class. Interventions might be to congratulate students on their progress as well as to address potential failure or drop-out.
Learning analytics processor
This was deemed to be so critical that it should form a separate layer, underpinning the other applications. Meanwhile compliance with the Predictive Model Markup Language (PMML) has already been specified by Jisc as a requirement for the predictive models to be used by the learning analytics processor. But one member advised us to be wary of "gold-plated pigs" - some vendors are great at presenting beautiful apps and dashboards which may have shaky underlying models and algorithms doing the predictions. Most staff are unlikely to want to know the fine detail of how the predictions are made but they will want to be reassured that the models have been checked and verified by experts.
Standards
The use of technical, preferably open, standards is going to be important for an architecture comprising a number of interchangeable components potentially built by different vendors. The Experience API (Tin Can) has been selected as the primary format for learning records at the moment; it should be relatively easy to convert data from the LMS/VLE to this format, and some plugins e.g. Leo for Moodle already exist. However there may be a maintenance overhead every time each LMS is upgraded.
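The Experience API records each learning event as an actor-verb-object "statement". A minimal example of what a VLE event might look like in that format (the verb URI is a standard ADL one; the learner and activity identifiers are illustrative only):

```python
import json

# Minimal xAPI (Tin Can) statement: actor, verb, object.
statement = {
    "actor": {
        "objectType": "Agent",
        "mbox": "mailto:student@example.ac.uk",   # illustrative learner ID
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.ac.uk/courses/bio101/quiz-3",  # illustrative activity ID
        "definition": {"name": {"en-GB": "Quiz 3"}},
    },
}

# Statements are sent to a learning record store as JSON.
payload = json.dumps(statement)
```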
It was suggested that the IMS Learning Information Services (LIS) specification would be appropriate for storing data such as groupings of individuals in relation to courses. There is already reportedly a LIS conversion service for Moodle.
The problem of ensuring universally unique identifiers for individuals (and activities) was also noted.
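One common approach to this problem (an assumption here, not something specified in the architecture) is to derive a version-5 UUID from an institutional namespace plus the local identifier, so the same input always yields the same ID across systems:

```python
import uuid

# Namespace derived from the institution's (hypothetical) domain name.
INSTITUTION_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "example.ac.uk")

def learner_uuid(local_id: str) -> str:
    """Deterministic, globally unique ID for a learner:
    the same local ID always maps to the same UUID."""
    return str(uuid.uuid5(INSTITUTION_NS, f"learner:{local_id}"))

a = learner_uuid("s123")
b = learner_uuid("s123")   # same input, same UUID
c = learner_uuid("s124")   # different input, different UUID
```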
Security
Our "cracker" (security expert) was concerned that security issues will be complex because of the number of different systems in place. Meanwhile there’ll be ongoing requirements for patches and version updates for the different LMSs and student information systems.
Have people given up on tablets?
Conclusions
We were reassured that the architecture, with some minor changes suggested by the group, is robust, and we’ll be aiming to put in place a basic "freemium" learning analytics solution for UK universities and colleges by September.
We hope also that this will contribute to international efforts to develop an open learning analytics architecture and we look forward to working with other organisations to develop it further in the future.
I’m grateful to all the participants who made such useful contributions on the day and in particular to Alan Berg of Apereo and the University of Amsterdam for initiating this workshop and doing the lion’s share of the organisation.
Niall Sclater | Blog | Aug 19, 2015 01:06am
Jisc’s Learning Analytics Network got off to a great start last week with a workshop hosted by the University of East London. The event was fully subscribed, with around 50 people attending from 31 different institutions, showing the high level of interest in the area in the UK.
Staff and students very positive about dashboards at Nottingham Trent
Mike Day, Director of Information Systems at Nottingham Trent University, gave the first presentation. NTU is particularly advanced in its use of learning analytics. Mike discussed how his university already has good retention levels but wanted to use data to better inform interventions, improving attainment and a sense of belonging for students. A dashboard was built using HP Autonomy and is now in use across the institution, combining biographical information with data sources such as door swipes, library loans and VLE use. This enables comparison of engagement across a cohort and raises alerts if students appear to be disengaged.
Mike Day discusses Nottingham Trent’s pioneering work in learning analytics
Students are "strongly positive" about the dashboard, with 93% of them wanting to be warned if they’re at risk of failure. Staff are also very positive. The dashboard confirms that engagement is the strongest predictor of progression, and shows that groups with historically poorer progression and attainment do have different levels of engagement. For these groups engagement is a stronger predictor than demographics.
Analytics to improve retention at Huddersfield
Next up was Sarah Broxton, Strategic Planning Officer at Huddersfield University, who presented on Huddersfield’s work to improve retention with the use of data. Despite Huddersfield’s improving NSS scores and league table positions there has been a strategic requirement to improve retention rates and institutional effectiveness and efficiency. Meanwhile attendance monitoring and a centralised timetable system have been introduced, and there’s a need to inform staff better about the data available to them.
By mapping leaver characteristics such as age and entry qualifications to current cohorts, together with attendance data, reports of students most likely to leave early were produced and communicated to personal tutors and other staff, encouraging them to get in touch. As with other large IT projects, Huddersfield found the technical issues relatively easy to solve - it’s changing human practices and processes that creates the challenge. However, through increased transparency and training for colleagues, acceptance of learning analytics is increasing.
Trying a "skunk-works" approach
Roger Greenhalgh, Jisc’s Strategic IT and Organisational Leadership Advocate, then talked about a "skunk-works" approach to analytics. Roger showed how small innovation units created within organisations, but relatively free from procedures and regulation, can develop analytics more quickly than traditional IT departments.
Engagement analytics at the University of East London
Gurdish Sandhu and Simon Gill from the University of East London discussed their Information Strategy and Student Engagement Analytics, combining attendance monitoring data with VLE usage data and other sources to identify students at risk of dropping out. The University uses QlikView and is at the forefront of deploying useful dashboards for a range of learning analytics applications.
Jisc’s work in the area
After lunch I presented with Paul Bailey and Michael Webb on Jisc’s activities in the area, discussing the architecture for a basic learning analytics system which is currently being procured, some of the ethical and legal issues for a code of practice for learning analytics, and plans for a student app.
Paul also announced that Jisc will provide funding to the Network for three small learning analytics projects of £5k each to be run from June to September, reporting back at the end of the year. Network members will decide on which proposals should be supported.
Group work
The final session of the day involved splitting into groups to discuss some of the issues of most concern to institutions, notably:
Interventions: we need to be open and transparent about these - and it should be clear when they will happen. Accurate interpretation of data is essential. The student needs proper support in order to take any suggested actions. Meanwhile, messages to students need to be managed carefully so they don’t have a negative effect. The intervention should be captured and measured so that institutions can find out what works.
Institutional adoption: in order to develop and roll out analytics at institutions the following need to be considered:
Identify the stakeholders
Fit any analytics project work into the institution’s business planning cycle - use something like a "quality of learning and teaching forum"
Ensure that senior management sponsorship is secured and that learning analytics is prioritised against competing projects
Put mechanisms in place to interpret the analytics and define interventions
Convince academics and tutors that there’s something in it for them
Identify genuinely valid analytics - not just things we’d like to do
Identify the risks of learning analytics
A more practical suggestion was for Jisc to develop a checklist for organisations on how to implement learning analytics - including the "elevator pitch" to senior management.
There was a tangible enthusiasm among those present about the potential of learning analytics to improve the student experience, and we’ll be planning further events soon. To stay informed about future events you can subscribe to the analytics@jiscmail list.
Niall Sclater | Blog | Aug 19, 2015 01:05am
What data and analytics should be presented directly to students? This was the subject of a workshop on 27th Feb in Jisc’s London offices, attended by staff from across UK higher education. We also had with us a couple of students with a keen interest in the area.
In advance of the meeting members of Jisc’s Student App Expert Group had anonymously provided a range of potential requirements, which I then grouped and used as the basis for the workshop (they’re included at the bottom of this post for information).
The first area is around information provision to students, and comprises functionality for:
Monitoring your academic progress
Comparing your progress to others or to your prior performance
Monitoring your engagement with learning
Providing useful information such as exam times and whether there are free computers available
The second area is concerned with action - the student actively entering information or doing something to enhance their learning. It consists of:
Prompts and suggestions
Communication facilities with staff and students
The upload of personal data
Providing consent to what data is used for learning analytics
Various other issues were suggested relating to the interface (e.g. ensuring it is easy to use), ethics (e.g. being aware that the app can only ever give a partial view of student progress), and data (e.g. accepting data from a wide range of sources).
During the day, groups discussed a number of these areas for functionality. For each we defined an idea, a purpose, benefits, drawbacks & risks, and presentational aspects. Some of these ideas are fairly wacky and might not survive further interrogation or prioritisation but here they all are for the record. The next stage is to run the ideas past students themselves to find out what they want to see in an analytics app.
How engaged am I?
The most common application of learning analytics is measuring student engagement. Putting this information in the hands of the learners themselves could help to reassure those who feel they’re on track and prompt those who aren’t engaging. There’s always the risk of course that students will game the system to achieve better engagement ratings without actually improving their learning. However it could also result in them finding the library, attending more lectures, using the VLE more or reading more books.
An idea for presenting this information was to show overall engagement on a scale of 1 to 5. Clicking the indicator would result in a further breakdown for e.g. library usage, lecture attendance and VLE usage. VLE usage might be further broken down if required, showing forum participation perhaps if that was considered important. Data could be shown by module as well as across modules.
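Such an overall indicator could be a simple weighted average of the component ratings. A minimal sketch, assuming each source has already been rated on the same 1 to 5 scale; the component names and weights are hypothetical, not a validated model:

```python
def engagement_score(components, weights=None):
    """Combine per-source engagement ratings (each already on 1-5)
    into one overall 1-5 figure via a weighted average."""
    weights = weights or {k: 1.0 for k in components}
    total = sum(components[k] * weights[k] for k in components)
    return round(total / sum(weights[k] for k in components))

# Illustrative: lecture attendance weighted twice as heavily.
overall = engagement_score(
    {"library": 4, "lectures": 2, "vle": 3},
    weights={"library": 1.0, "lectures": 2.0, "vle": 1.0},
)
```

Clicking the overall figure in the app would then reveal the individual component ratings, as described above.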
Compare my engagement
Learners’ engagement could be compared with that of their peers or even their own past performance. Again this could be potentially motivating and inspire students to change their behaviour. The risks include being demotivating, falsely equating engagement with success, and privacy issues e.g. the identification of individuals in small cohorts from anonymous data.
How am I progressing?
The aim here is to gather and surface academic progress indicators and to identify actionable insights for the student. Timely information would aim to change their behaviour and improve achievement. Having all the information in one place would be beneficial but would there be enough information to enable students to take action? One risk is that this could "kill passion" for the subject and further divert effort into assessed or measured activities. Providing context would also be important - a grade without feedback may not be helpful. It also could be counterproductive for high performing students. Meanwhile raised and unfulfilled expectations could result in worse feedback for institutions on the National Student Survey.
Data could be presented on a sliding scale, showing whether they were likely to pass or fail and allowing them to drill down into more granular detail on their academic performance.
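If such a scale were anchored to percentage marks, it could be labelled with the typical UK degree classification boundaries. A sketch using the common conventions (boundaries vary between institutions, so treat these as illustrative):

```python
def uk_classification(mark: float) -> str:
    """Map an overall percentage mark to the typical UK degree
    classification boundaries. Institutions set their own rules;
    these thresholds are the common convention only."""
    if mark >= 70:
        return "1st"
    if mark >= 60:
        return "2:1"
    if mark >= 50:
        return "2:2"
    if mark >= 40:
        return "3rd"
    return "Fail"
```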
Compare my academic progress
This functionality would allow students to compare where they were in key activities with previous cohorts and with peers. It could aid those who lack confidence and help them to realise that they are doing better than they realised. Of course it could also damage their confidence. Another risk is that the previous cohort might be different from the current intake or the way the course is being taught might have changed.
My assessments
A possibility would be to show analytics on what successful students do and how your actions compare e.g. if students submit assessments just before the deadline are they more likely to fail? This might result in students being better prepared for their assessments.
My career aspirations
The aim here would be to help understand whether the student is on track to achieve their chosen career based on records of previous learners. This might include networking opportunities with students who have already followed a similar path. It might help to increase engagement and with module pathway planning. Students could talk about their skills and better understand how to quantify them.
Meanwhile suggestions such as "you need to know about teamwork" or "identify opportunities for voluntary work" could be provided. The app might also suggest alternative career paths or that a student is on the wrong one e.g. "your maths does not appear to be at the level required for a nuclear physicist".
Risks include that the app could be overly deterministic, restricting learner autonomy - and that students would need to ensure that their data was up to date.
Plan my career path
A related possibility is showing what educational journey a student needs to take to achieve their intended career, helping them to avoid the wrong choices for them e.g. what does the life of a midwife look like and what was their educational journey to get there?
My competencies
Another idea discussed was to enable students to monitor their competencies and reflect on their skills development, perhaps through some sort of game. This could encourage them to engage better with the materials and with their cohort. Again this wouldn’t of course guarantee success.
My emotional state
Enabling students to give an idea of their emotional state in some way would allow them to gauge how they were compared to their peer group, and to provide better feedback to the institution or to tutors. This is highly personal information of course and you might want it to be visible to you only, unless it is anonymised.
Why I didn’t attend
The app could allow students to input their reasons for non-attendance e.g. "I didn’t attend this lecture because I had my tonsils out" and "but while recovering in hospital I watched the lecture video and read the lecture notes". This might enable engagement scores to be adjusted so that students felt they reflected the real situation.
Communicate
We looked at whether the app should include communications facilities around the analytics. This might be between students and tutors, or perhaps with peer mentors. There was concern that this might be mission creep for the app; however, integrating communications around the interventions taken on the basis of the analytics might be useful. The app could also provide information about opportunities for communications around student support, with personal tutors, study buddies, peer mentors or clubs.
There would be potential for communications based on the programme rather than just the module, and the functionality might for instance be used to facilitate the take-up for personal tuition. The tools available might depend on the level of the students e.g. encouraging those on a one year taught Masters. One issue raised was that there would be student expectations of a quick response, and this might result in even more email "tyranny" for academics.
Link app to my social media accounts
The idea here is to enable students to link the app to their Twitter, LinkedIn or other social media accounts so that they can send status updates from the student app. This would enable the aggregation of, for example, Twitter feeds from all those on the module with Twitter accounts, allowing learners to connect better with others. The institution could use the data for sentiment mining and updates could be fed to the lecturer, even while they’re giving the lecture.
Give my consent for learning analytics
In order to ensure the legal and ethical collection and use of student data for learning analytics, a key part of the learning analytics architecture Jisc is commissioning will be a consent system, which is likely to be controlled from the student app. This could be particularly important in some of the more personal applications such as linking to your social media accounts or inputting your emotional state. It will also help users to understand what is being done with their data, feel a sense of control over the process, and help to reduce concerns that data could be misused. It would allow students to control any third party access to their data e.g. by future employers.
My location
Providing geolocation data to the app could have a number of applications such as helping vulnerable students to feel safer, campus mapping and self-monitoring. It could help institutions by enabling the tracking of the use of services. Students might also be prompted to attend campus more or spend more time in the library. This does of course have privacy implications and access to location data would need to be strictly controlled (by the student). It would also generate large quantities of data.
Fun analytics
The aim here would be to motivate and engage, and to get students to use the app, by providing fun or amusing analytics. Options discussed included "calorie burner info" e.g. "you read 2 articles today and used 5 calories"; a campus induction game; weekly challenges based on activity and studies; and a badge system of rewards.
Where next?
A recommendations engine could be presented through the app, providing relevant offers, signposting and information to students. Again this could potentially result in increased engagement, driving students to helpful services. On the downside it could be intrusive, add to information overload, and be used for marketing rather than benefitting learning.
Information could be presented on what’s trending, forthcoming local events, and silly facts e.g. "30% of students who eat here get a 1st class honours!" This could help students to be better informed and prompt them to do something they might not have before.
My students’ union
Increased engagement with the students’ union can help learners to feel better connected, so the app could also be used to facilitate this by showing events and information - and potentially engage them more in the democratic process.
Car park
We parked a number of ideas during the day to return to perhaps at a later stage, including: assessment regulations, tutor performance, data literacy, the naming of the app, and how we get disengaged students to use it.
Suggested functionality for the app
The following possible functions were suggested by members of the Student App Expert Group in advance of the session and then expanded on in the discussions, as summarised above. This provides a good checklist of what we might wish to consider including:
Information
Monitoring academic progress
Progress. What percentage of the course materials, activities, formative assessments etc. have you done?
Student should be able to see their progress with clear indication whether they are at risk or not
Show students their academic progress, at a granular level: what marks they have for each assignment and how that contributes to their overall progress
Ability to track own academic progress - get marks, compare own marks across modules and years
Monitor student progress (provide overall picture of student performance and alert to potential problems)
Could there be an area showing their student performance?
Real-time, or near real-time updates on progress
At a glance views of progress against criteria (such as assessment), links to personal progression tracking, and ‘useful’ traffic-light style
Overview of essay marks, including marks for research skills, writing skills, originality etc. -> ability to compare to previous essay marks
Access to formative and summative marks, and feedback
Performance data: grades
Performance data against entry profile and previous performance
An integrated view of a student’s study career, from the programme level to the course/module level
What does the rating mean?
Comparing academic progress
Academic "performance" in relation to others on cohort, possibly to previous cohorts and grade predication
Crucially, students should be able to compare their data both to themselves over weeks/months/years of study, but also to the ‘average’ behaviour of the cohort with whom they study.
Answering the question: "Am I in line with my cohort, both now and preferably historically too?"
Comparison. How is your progress compared to others - in the class, best in class, last year’s class etc?
Leaderboards? Actually I hate them but my research shows that for some classes of student they do encourage engagement.
Benchmarking students' academic performance against their peers
Ability to compare essay marks to average marks of cohort
Where would 1st class degree attainment be on the line - and 2nd class, 3rd class and so on?
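The cohort-comparison suggestions above boil down to a simple calculation: given a student's mark and the marks of their cohort, report the average and where the student sits. A minimal sketch (the function name and the marks are purely illustrative, not part of any proposed app):

```python
# Illustrative sketch of the cohort-comparison idea: where does a
# student's essay mark sit relative to the class average?

def cohort_position(my_mark, cohort_marks):
    """Return the cohort average and the share of marks at or below mine."""
    average = sum(cohort_marks) / len(cohort_marks)
    percentile = 100 * sum(m <= my_mark for m in cohort_marks) / len(cohort_marks)
    return average, percentile

avg, pct = cohort_position(62, [45, 55, 58, 62, 68, 72])
print(f"Cohort average {avg:.1f}; you are at or above {pct:.0f}% of the class")
```

A real implementation would also need to handle historical cohorts and anonymisation, which the suggestions above raise but do not resolve.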
Monitoring engagement
Look at interactions/activity they have taken part in on VLE and/or other systems, number of journals accessed online/in the library
Activity data on attendance, VLE usage and library use - and appropriate comparators if they exist
Useful information
A ‘calendar plus’ function that tells you not just what your lectures are for the day, but what other activities there are around campus - sports classes, clubs, whether certain lecturers have office hours, whether there are free computers in the computer lab, etc. It needs to respond to where you are on campus and make suggestions based on how much time you have to spare. For example, ‘You have an hour until your next lecture - why not boost your ‘library score’ and visit there for a little while, or go talk to Professor Blogs since she has office hours’.
Have information on the university’s important events and useful revision techniques
Easy and better access to learning resources
Action
Prompts and suggestions
Student should know what to do next
Provide a visual representation of the chosen metric at a granular enough level that student activity can clearly precipitate change
Potential outcomes and necessary effort - e.g. predicted 2:2: do this well here and here and get a 2:2
Recommendations of training courses and resources based on essay marks
Prompts to regular self-assessment of research skills, writing skills, presentation skills etc. -> allow students to take responsibility for learning/progress
Provide students with a ‘to do’ list, showing what they have to do and by when. The difficulty here is making it all-inclusive
Right now, immediately after reading this text, what do you expect students to actually do?
Give students access to people who can help them and identify the specific kinds of help that can be provided
Tips to improve performance, what to do next
Gives information on how to improve not just/only status
Diagnostics. The system should be able to see where I’m not doing well and point me to support materials. E.g. you don’t seem to be doing well at this bit of the syllabus - or you don’t seem to be doing well at more analytic questions…
A recommendations aspect based on past use (and how others behave) - based on this module/this paper/this time of studying, we recommend that you consider this topic/this other article/this prime study time
Provide information on ways they could improve their engagement
Communication
A ‘question’ function to send concerns to the academic personal tutor or other intermediary
Identify effective communication strategies
Facilitating interactive and better communications with academic and admin staff
Ability to communicate between staff-student, student-student
Upload of personal data
Ability to load personally captured data to provide context and information
Allow students to set their own notifications - which may be alerts, reminders, or motivating messages - triggered by circumstances of their choosing. (Making good decisions about this would need facilitation, but would help towards metacognitive awareness and understanding of the data and the software themselves.)
Consent
A way of opting in or out of sharing the data, or aspects of the data, with staff
Granular control of who sees what - controlled by the student
Other issues
Interface
The student app should be easy to use
Easy access to visual information
Whatever the outcome is for the learning analytics app, I’d try and keep the core interface simple. I’d personally prefer one graphic ultimately, but I’m sure there are arguments for a range of options
Cross-platform and brandable (students may know their institutional brand but perhaps won’t respond to something plastered in Jisc branding)
Access to the underlying data, but also good conceptually-straightforward visualisation of that data.
Analytics visualisations that will prove compelling for students to visit the app.
Cross-platform/device so all can access
Ethics
Transparency about the gaps - the app should avoid over-determining - or giving the impression of over-determining - students’ progress and achievement based on data, which is inevitably an incomplete representation of learning but which may carry more weight than the ineffable or unrecorded moments of learning.
Data sources
Accept data from a range of simple or aggregated end-points - I appreciate it is likely to accept a feed from the Jisc basic LA tool, but it would be useful if we could provide a feed from the basic data we have in Blackboard ASR
Impact on teaching
Identify effective teaching and assessment practices
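Two of the more concrete suggestions in the checklist - granular, student-controlled sharing of data with staff, and student-defined notification triggers - can be sketched as simple data structures. The class and field names below are purely illustrative assumptions, not part of any Jisc design:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of two checklist items: granular consent controls
# and student-defined notification rules. Names are illustrative only.

@dataclass
class SharingPreferences:
    """Which categories of a student's data each staff role may see."""
    visibility: dict = field(default_factory=dict)  # category -> set of roles

    def allow(self, category: str, role: str) -> None:
        self.visibility.setdefault(category, set()).add(role)

    def revoke(self, category: str, role: str) -> None:
        self.visibility.get(category, set()).discard(role)

    def can_view(self, category: str, role: str) -> bool:
        return role in self.visibility.get(category, set())

@dataclass
class NotificationRule:
    """A student-defined alert: a metric, a threshold and a message."""
    metric: str          # e.g. "vle_logins_per_week"
    threshold: float
    message: str

    def triggered(self, value: float) -> bool:
        return value < self.threshold

# The student, not the institution, controls both structures:
prefs = SharingPreferences()
prefs.allow("attendance", "personal_tutor")
rule = NotificationRule("vle_logins_per_week", 3,
                        "You've been quiet on the VLE this week")
```

The point of the sketch is the direction of control: both opting in to sharing and defining triggers sit with the student, as the consent suggestions above require.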
Niall Sclater | Blog | Aug 19, 2015 01:04am
Jisc is currently procuring the different elements of its architecture for a basic learning analytics system which we plan to make available to UK colleges and universities later this year. In this video I explain how it all fits together.
The service will consist of the following components, and institutions will be able to opt in to use some or all of them as required:
A learning analytics processor - a tool to provide predictions on student success and other analytics on learning data to feed into student intervention systems.
A staff dashboard - a presentation layer to be used by staff in institutions to view learning analytics on students. Initially this presentation layer will be focussed on the learner but dashboards for managers, librarians and IT managers could also be developed.
An alert and intervention system - a tool to provide alerts to staff and students and to allow them to manage intervention activity. The system will also be able to provide data on intervention methods and their success, to be fed into an exemplar "cookbook" on learning analytics.
A student app - based on requirements gathering with staff and students. Integration with existing institutional apps will be supported.
A learning records warehouse - a data warehouse to hold learning records gathered from a range of institutional data sources. We will define an output interface and also support integration with a common set of institutional systems.
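The flow between three of those components - warehouse, processor, and alert system - can be sketched in a few lines. Everything here is a toy stand-in under my own assumptions (the record fields, the risk formula, and the function names are not the Jisc interfaces, which are still being procured):

```python
# Illustrative sketch of the warehouse -> processor -> alerts pipeline.
# Record fields, the risk score and all names are assumptions, not Jisc APIs.

def fetch_learning_records(student_id):
    """Stand-in for the learning records warehouse: activity counts this week."""
    return {"vle_logins": 2, "library_visits": 0, "assessments_submitted": 1}

def predict_risk(records):
    """Stand-in for the learning analytics processor: a toy risk score
    where less recorded activity means higher risk."""
    activity = sum(records.values())
    return max(0.0, 1.0 - activity / 10)

def raise_alert(student_id, risk, threshold=0.5):
    """Stand-in for the alert and intervention system."""
    if risk >= threshold:
        return f"Alert: student {student_id} at risk (score {risk:.2f})"
    return None

records = fetch_learning_records("s123")
risk = predict_risk(records)
print(raise_alert("s123", risk))
```

The separation matters for the opt-in model described above: an institution could run only the warehouse and dashboard, or add the processor and alerts later, because each stage consumes the previous stage's output rather than sharing state.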
When will it be available?
A procurement process is underway: proposals from suppliers have now been received, and we are at the selection stage to appoint suppliers to develop each component of the learning analytics solution. The agreements will be in place in early May.
The expectation is that a basic learning analytics system consisting of the processor, dashboard and warehouse will be in place to pilot with universities and colleges from September 2015. The other components will be developed over the next 6-12 months. If the pilots prove successful and popular, a full production service will be provided from September 2017.
Niall Sclater | Blog | Aug 19, 2015 01:04am