
I’m back from yesterday’s excellent Workshop on Ethics & Privacy Issues in the Application of Learning Analytics in Utrecht, organised by LACE and SURF. Hendrik Drachsler from the Open University of the Netherlands kicked off the session by presenting a background to learning analytics and some of the resulting ethical and privacy issues. He mentioned the situation in the Netherlands, where universities are now partially funded on the basis of how many students they graduate, and the concern that this gives them an incentive not to accept students who are predicted to fail. He also discussed the InBloom debacle in the US - "a perfect example of not taking care about privacy issues". In another case in the Netherlands, an app used on tablets in schools collected data on which further analysis was carried out; problems arose because this analysis wasn’t described in the terms and conditions of use.

Hendrik mentioned that his call for ethical and privacy issues in the application of learning analytics had produced over 100 issues. These were then put into four categories: privacy, ethics, data and transparency. The aim of the day was to discuss these issues and begin to look for solutions to them. The group decided that there are often no clear boundaries between these categories. Certainly I’ve found it artificial to try to split legal issues from ethical ones when carrying out my recent literature review of the area. Much of the law is based on ethics - and sometimes an ethical stance has to be applied when interpreting the law in particular situations.

The workshop wasn’t purely full of Hendriks, but a second Hendrik, Hendrik vom Lehn, then gave an informative presentation on practical considerations around some of the legal issues arising from learning analytics. Much of what he said, and subsequent discussions during the day, related to the EU Data Protection Directive. Hendrik pointed out a common misconception about the Directive: the scope of "personal data" is much broader than most people think, and includes anything which makes data personally identifiable. Another interesting concept in the Directive is that data can potentially be processed without consent from individuals if it is in the "legitimate interest" of the organisation to do so. However in practice it’s likely to be better to inform students and obtain their consent for data collection and the resulting analytics. Hendrik also discussed the US concept of "reasonable expectation": people whose data is being processed should have reasonable expectations of what is being done with it. Thus if you start using it in new ways (e.g. the recent Facebook mood-altering experiment) you’re on dangerous ground.

Anonymisation is often proposed as an alternative to obtaining consent, but this can be difficult to achieve. It’s particularly problematic in small groups where behaviours can easily be attributed to an individual.

Hendrik felt that where grading is based on learning analytics, or can in some way affect the career of the student, this could have legal implications. Another issue he mentioned, which I hadn’t come across before, was the subordinate position of students: they might feel obliged to participate in data collection or learning analytics activities because they were being graded by the same person (or institution) that was analysing them. Would any consent given be regarded as truly voluntary in that case?
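The small-group anonymisation problem mentioned above can be made concrete with a quick check of group sizes - the "k" in k-anonymity. The sketch below is purely illustrative: the field names and the threshold of 5 are my own assumptions, not recommendations from the workshop. The point is simply that if any combination of quasi-identifiers such as module, tutor group and gender is shared by only a handful of students, supposedly anonymous analytics can still point at individuals.

from collections import Counter

def smallest_group_size(records, quasi_identifiers):
    """Return the size of the smallest group sharing the same combination
    of quasi-identifier values (the 'k' in k-anonymity)."""
    combos = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return min(combos.values())

# Hypothetical records: module, tutor group and gender could together single someone out
records = [
    {"module": "H817", "tutor_group": "A", "gender": "F", "vle_logins": 42},
    {"module": "H817", "tutor_group": "A", "gender": "M", "vle_logins": 17},
    {"module": "H817", "tutor_group": "B", "gender": "F", "vle_logins": 55},
]

k = smallest_group_size(records, ["module", "tutor_group", "gender"])
if k < 5:  # the threshold is a policy choice, not a legal rule
    print(f"Smallest group has only {k} member(s): the data is not safely anonymous")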
A member of the audience then asked if there was a difference between research and practice in learning analytics. Hendrik suggested that ethically our approach should be the same, but from a legal perspective there may be a difference.

So what happens if a student asks for all the data that an institution has about them? Hendrik thought that the Directive implied that we do indeed need to make everything we know about students available to them. However there might be a possible conflict between full data access and the wider goals of learning analytics - it might make it easier for students to cheat, for example. Also it may be difficult to provide meaningful access for an individual while excluding other students’ data.

Another potentially difficult area is outsourcing and data transfers to third parties. This is particularly problematic of course when that data is being transferred outside the European Union. For students the process of understanding what is happening to their data - and accessing it - can then become more complex, and they may have to go through several steps. Ownership of the data is not complete in this situation for any party (though in a later discussion it was proposed that "ownership" is not a helpful concept here - more relevant are the EU concepts of "data controller" and "data processor").

We then split into groups and had the benefit of some great input from Jan-Jan Lowijs, a privacy consultant from Deloitte. He described the nine general themes in the Directive, which we found a useful way to propose answers to some of the 100 issues that had been submitted. These are:

1. Legitimate grounds - why you should have the data in the first place
2. Purpose of the data - what you want to do with it
3. Data quality - minimisation, deletion etc
4. Transparency - informing the students
5. Inventory - knowing what data you have and what you do with it already
6. Access - the right of the data subject to access their data: when can they have access to it and what can they see
7. Outsourcing - and the responsibilities of your institution as data controller and the third party as data processor
8. Transport of data - particularly problematic if outside the EU
9. Data security

Attempting to answer some of the questions submitted, using the themes as guidance, resulted in the following:

Who has access to data about students’ activities? Students themselves, and certified access for teachers, researchers etc, based on theme 2 above (purpose of data).

What data should students be able to view? All data on an individual should be provided at any time they request it - that’s the situation to aim for, based on theme 6 (access).

Should students have the right to request that their digital dossiers be deleted on graduation? Yes, so long as there are no other obligations on the institution to keep the data, e.g. names, date of birth, final grades, based on theme 3 (data quality).

What are the implications of institutions collecting data from non-institutional sources (e.g. Twitter)? Consent must be obtained from the students first, based on theme 4 (transparency). A case in Finland was noted where two students sued their university for re-using their Twitter data.
Something interesting that Jan-Jan also mentioned was that there are differences in data subjects’ attitudes to privacy, and that a number of studies have shown a fairly consistent split of:

25% "privacy fundamentalists" who don’t want to share their data
60% pragmatists who are happy to share some of their data for particular purposes
15% people who "don’t care" what happens to their data

An organisation therefore needs to make an active decision as to whether it attempts to cater for these different attitudes or finds some middle ground.

Some of the conclusions from the final session were:

Students were absent from the discussions and should be involved in the future.
We should fully articulate the risks for institutions of learning analytics. What are the showstoppers? Are they reputational or based on a fear of loss of students?
"Privacy by design" and user-centred design, with much better management of their data by users themselves, were thought to be vital.
InBloom was suggested as an "anti-pattern", to be studied further to establish what we shouldn’t be doing.
If you think something’s dodgy then it probably is. I have to admit being slightly concerned to hear that one university has equipment in its toilets to ensure that you’re not using your mobile phone to cheat if you have to nip out during an exam. A good rule of thumb proposed by Jan-Jan is that if you feel uneasy about some form of data collection or analysis then there’s probably something to be worried about.

The outcomes from the day were being written up much more rigorously than I have done above, by a professional writer, and will be fed into subsequent workshops held by LACE, with the aim of producing a whitepaper or longer publication in the area.
Niall Sclater . Blog . Aug 19, 2015 01:09am
Jisc today released a new report: Learning Analytics: the current state of play in UK higher and further education. It was written after a series of visits I made recently to universities and colleges across the UK which were known to be carrying out interesting work in learning analytics. I was inspired by campuses filled with enthusiastic freshers out enjoying the late summer sunshine and no doubt largely unaware of the technological innovations underway aimed at enhancing their chances of academic success. Indoors I had fascinating discussions with some of the staff who are pioneering the use of the new technologies in the UK.

Various things struck me as I carried out structured interviews at each institution. They varied tremendously in their organisational structures and approaches to education, and hence in their motivations for using learning analytics. Increasing retention for example was vital for some of them, while others, who didn’t have huge problems with student drop-out, were more interested in improving the learning experience, the tutor-student relationship or the institution’s scores in the National Student Survey.

It was also interesting to discover just what an early stage the UK is at in its use of learning analytics. The activities discussed ranged from general business intelligence and the monitoring of key performance indicators, to tools which predict students at risk of failure, to systems which help manage the workflow around staff-student interactions. The distinction between academic analytics and learning analytics which some commentators have attempted to make didn’t really seem to apply - most people see the data as part of a continuum which can be used by people at all levels of the institution.

The approach to gathering data for learning analytics varies widely across the institutions too. Student information systems and virtual learning environments provide the bulk of the data. But one of the most surprising findings is that there is little common ground among the participating institutions in the analytics systems they are using. This seems to confirm the nascent state of the technologies and the lack of consolidation in the marketplace. Contrast this with the market for VLEs, where Blackboard and Moodle dominate.

Most interviewees were reluctant to claim any significant outcomes from their learning analytics activities to date. Several mentioned the strong correlation they have found between physical attendance and achievement. Others found that a significant outcome of the analytics work has been improved connections between disparate parts of their organisations. When asked about future plans, most institutions were planning to consolidate the use of the systems they had recently put in place but would also be integrating new data sources and improving the presentation of dashboards. There was a strong desire as well to put tools in the hands of students so they can better monitor their own behaviours and performance.
Niall Sclater . Blog . Aug 19, 2015 01:08am
As MOOCs began to proliferate it was clear that services would emerge which would make it easier to find those of interest to potential learners. MOOC providers are now making data about their courses available through RSS feeds and APIs so it’s possible to harvest those and develop tools to allow people to find and review courses more easily. But what if you want to build in MOOC study as part of your professional development? You might need your manager to allow you to take time off or your company to pay for the certification. Last week I met with two brothers from South Africa, now based in London, who are developing a system to enable just that: Michael and Greg Howe from GroupMOOC. I was acting as a mentor for the Open Education Challenge - a European incubation programme offering mentoring, coaching and investment to some of the most innovative education start-ups worldwide. I’ve found that some of the most inspiring developments in educational technology come from small, young companies. The Start-up Alley at Educause for example is for me the most fascinating aspect of that conference. Back in London at the Open Education Challenge I was fortunate to meet with a range of enthusiastic entrepreneurs who’d given up secure jobs to pursue their business ideas. GroupMOOC enables you to search for MOOCs you’re interested in, read reviews and plan your workload, exporting key events and deadlines to your calendar - particularly useful if you’re studying more than one at once. You can also create groups of friends and colleagues to share your experiences with. The product helps HR directors to develop an overview of what MOOCs their staff are doing and builds in workflow enabling managers to authorise time off (the default setting is "100% in your own time!") or agree to pay for the final certificate. GroupMOOC is well-designed and useful. Whatever MOOCs evolve into in the future, there will certainly remain a massive and growing market for "massive" and not-so-massive online education - and a need for tools which help organise the complexity of thousands of courses from multiple providers. Greg and Michael’s next challenge is to convince a panel of investors to provide funding to further expand the product’s functionality. They’ve made a great start.
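By way of illustration, harvesting those course feeds needs very little code. The sketch below uses the Python feedparser library and entirely made-up feed URLs - each provider publishes its own catalogue, and some offer richer JSON APIs instead - so treat it as a starting point rather than anything GroupMOOC actually does.

import feedparser  # pip install feedparser

# Hypothetical feed URLs - real MOOC providers publish their own catalogues
FEEDS = [
    "https://example-mooc-provider.org/courses.rss",
    "https://another-provider.example/feed.xml",
]

def harvest_courses(feed_urls):
    """Collect basic course details from a list of RSS/Atom feeds."""
    courses = []
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            courses.append({
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": entry.get("published", ""),
                "source": url,
            })
    return courses

if __name__ == "__main__":
    for course in harvest_courses(FEEDS):
        print(course["title"], "-", course["link"])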
Niall Sclater . Blog . Aug 19, 2015 01:08am
The latest LACE Project event was held in the illustrious surroundings of the Allard Pierson Museum at Amsterdam University this week. The focus this time was on open learning analytics. After some lightning presentations on participants’ interests in the area, we split into groups to look at what exactly open meant in the context of learning analytics. We discussed some of the issues with Adam Cooper of Cetis, shown here alongside other fine specimens of humanity.

Probably the most obvious aspect of open in the context of learning analytics is the reuse of code (and documentation) that others have created. Predictive models can also of course be opened up to others. Having open APIs in learning analytics products is important too - as is interoperability of the different systems. And a vibrant community of people and institutions who wish to share and enhance all these things is essential. Openness was thought to "keep the market honest" and it also improves transparency for learners and other users of learning analytics systems. Openness may also mean that the data from one environment can be connected to that in another one, not only across the different systems in one institution but potentially with other institutions too. Adam has documented some of the organisational initiatives to share data for learning analytics.

In a group discussion later we looked at some of the next steps or "big ideas" for open learning analytics:

Clarifying the technical architectures - Apereo has defined an architecture and Jisc is commissioning the components of a basic learning analytics system
Developing a privacy-enhanced technology infrastructure
Student-facing tools to monitor and analyse their own learning progress
Tools to evaluate accessibility issues - an example was given of a system which determines if users are dyslexic and then adapts the learning accordingly

The other groups reported back on their proposed essential next steps:

Understanding the organisational consequences (or "systemic impact") of deploying learning analytics
Gathering evidence for relevant specifications and standards that work or don’t work
Obtaining consent from students to help evolve learning analytics, instead of just agreeing that their data can be used (see Sharon Slade and Paul Prinsloo’s suggestions on students being collaborators in the learning analytics process rather than mere data subjects)
Building reference implementations of learning record stores
Understanding the barriers to the deployment of learning analytics

One of the clearest messages to come out of the day was just how important tackling the privacy issues is becoming in order to avoid an InBloom-type fiasco in Europe. While this is a problem everywhere, concerns about the lack of systems for granting consent to the collection of data are massively holding up implementations of learning analytics in countries such as the Netherlands and Germany. A further event being planned for Paris in February will attempt to progress understanding in this area and develop transparent learning analytics systems which include mechanisms to obtain consent at granular levels from students to process their data.
Niall Sclater . Blog . Aug 19, 2015 01:08am
Do institutions need to obtain consent from students before collecting and using data about their online learning activities? Should learners be allowed to opt out of having data collected about them? Could showing students predictions about their likelihood of academic success have a negative effect on their motivation and make them more likely to drop out?

These are some of the questions addressed in a new report which is released today by Jisc: Code of practice for learning analytics: A literature review of the ethical and legal issues. The aim of the review is to provide the groundwork for a code of practice which is intended to help institutions solve the complex issues raised by learning analytics.

It was a very interesting task gathering together publications from many different authors and organisations. I drew material from 86 publications, more than a third of them published within the last year, from sources including:

The literature around learning analytics which makes explicit reference to legal and ethical issues
Articles and blogs around the ethical and legal issues of big data
A few papers which concentrate specifically on privacy
Relevant legislation, in particular the European Data Protection Directive 1995 and the UK Data Protection Act 1998
Related codes of practice from education and industry

Expressing issues as questions can be a useful way of making some of the complexities more concrete. I’ve incorporated 93 questions from the literature that authors had posed directly. The categorisations of these, highlighted in the word cloud below, give an instant flavour of the main concerns around the implementation of learning analytics being raised by researchers and practitioners. At the end of the report I reviewed 16 codes of practice or lists of ethical principles from related fields and found the main concepts people wish to embody are transparency, clarity, respect, user control, consent, access and accountability.

I’ve attempted to be comprehensive, but I challenge you to spot any relevant legal or ethical issues I’ve missed in the report - please let us know in the comments below if you find any. The next task is to put together an expert group to advise on the development of the code of practice … and to begin drafting the document itself.
Niall Sclater . Blog . Aug 19, 2015 01:07am
Retweeting matters - hugely. A scan of my tweet stream shows that most tweets have already been retweeted or are themselves retweets of other tweets. It’s the same situation with Facebook. Social media is an endless recycling of other people’s thoughts or creations. The simple fact is that most of us don’t have the time to come up with original content, but if we see something interesting we want to share it with others. Our motivations for sharing are complex. We might want to enhance our professional credibility by sharing an important development in our field, and be the first to do so. This can attract followers, itself boosting work opportunities - or our fragile egos, if we measure our value by the number of followers, favourites and retweets we get. Retweeting may also simply be driven by the desire for others to share our enjoyment of an amazing video, amusing cartoon or fascinating article.

RT

It didn’t take long for Twitter users to start prefixing the acronym "RT" when resending other people’s tweets. In the academic world this is important - implying that you wrote the tweet yourself is a form of plagiarism, and doesn’t give credit to its originator. The problem with using "RT" was that your tweet stream would then sometimes be filled with the same tweet retweeted by multiple people. So Twitter invented the retweet function, which meant that the tweet would only appear once in your feed, no matter how many people subsequently retweeted it this way. But it also meant that you had no recognition for retweeting any more. Twitter then added a further function to show how many retweets your retweet had received, which solved the problem of lack of recognition while crediting the original author of the tweet for their work.

Modifying and de-identifying

The "MT" or modified tweet is one way people are attempting to make the tweet their own but still give credit to its originator. It’s also a good way of ensuring that you subsequently get the retweets rather than the originator. Not everyone is driven by an ethical desire to avoid plagiarism. There are people who appear to be making a career out of scanning multiple feeds for interesting content and then repackaging it into their own tweets as if they were the first to discover it - de-identifying the original author. They then get the credit for being the expert, and the ego boost from retweets of their tweet, new favourites and followers.

Twitter analytics

Being involved in a project at the moment where disseminating information about it is important, I’ve been thinking a lot about these issues. Retweeting matters because it’s a sign that what we’re doing is of interest to people. It also has a monetary value attached to it in the commercial arena. There are packages such as Twitter analytics which show the reach of your tweets - though they miss all the RTs, MTs and de-identified retweets. But some analytics systems are getting more sophisticated and gather similar tweets, which may show how your tweet was repackaged by others and give you a more realistic view of its impact.

Getting lost in a sea of tweets

As many Twitter users follow hundreds or thousands of other accounts, it’s more than likely that when you tweet something it’ll be lost in a sea of other tweets and they’ll never see it. When others retweet your comment it doesn’t necessarily help, because if your followers are also following them they won’t see the retweet.
Some authors tweet about the same thing in a different way at different times of the day to attempt to put their work in front of you if you missed it the first time. But I can’t help feeling that that just tends to get annoying if you did see it already, and I don’t personally want to be tweeting something more than once, particularly as a friend told me once, flatteringly, that his mobile phone beeps every time I send a tweet.

So RTs, MTs and simple plagiarised, unattributed retweets of your original comment - if it includes a link to your content somewhere else - end up being great, as more people are likely to view your content. If those people all themselves get retweets, reputation enhancement or new followers, ego boosts or a simple joy of sharing along the way, then good luck to them.
Niall Sclater . Blog . Aug 19, 2015 01:07am
"Facebook is mainly for older people who want to share baby photos." If you spend any time at all with teenagers it becomes clear pretty quickly that their use of social media is a bit of a mystery to us older folk.  Their devices are constantly emitting strange noises from unknown apps that demand their instant attention. I took the opportunity while having a bracing walk through hail showers in the Highlands yesterday with a fifteen year-old to ask her about what applications she finds most useful.  The uses of social media by one Scottish schoolgirl are not necessarily representative of all teenagers but they’re worth noting just because they’re so different from my own.  The situation is evolving quickly and those of us who work with social media need to understand how the next generation of soon-to-be adults is using the technologies. Facebook She hardly ever updates her status any more:  "Most of what people post is just boring.  It’s mainly for older people who want to share baby photos."  She’s also acutely conscious of her large number of "friends" from different backgrounds, including family, and how impossible it is to post something of relevance to all the different audiences.  She and her peers have systematically gone back through their Facebook timelines and purged all the status updates which are now irrelevant or embarrassing. Facebook still has two main uses for her though: As a landing place for her presence on the Internet As an instant messaging system: it’s a great way to have synchronous or longer-lasting chats with friends on a one to one basis.  Sometimes the chats have several participants - and one conversation she keeps going with two friends permanently If these are the most useful facilities of Facebook for its younger users then the platform’s future is on shaky ground.  Both of these functions are already replicated by countless other systems, and could very quickly migrate away from Facebook if fashion dictates it.  Meanwhile the myriad third party apps presented on the platform are it seems largely irrelevant - at least to this teenager. I discussed the Facebook mood experiment with her to see what she thought.  She’d never heard of it and, despite being bright and having a strong moral sense, saw nothing wrong with it whatsoever.  This was the only really surprising thing about our conversation.  To my mind, and I would assume most of my generation, messing about with people’s emotions in that way, unbeknown to them, is insidious and immoral.  But this digital native, if there is such a thing, simply didn’t see the problem with it. Snapchat She’s frequently taking selfies or snapping photos of interesting things and sharing them with individual friends or a few at once - and these are generally real friends, not just contacts.  Sometimes random people get in touch with her to ask if she wants to be friends on Snapchat.  "One girl contacted me last week.  I said ‘Do I know you?’  She said ‘No.’  So I said ‘Well why would I want to talk to you then?’ She didn’t respond."  I pointed out that it might not actually have been a girl but, somewhat worryingly, she didn’t seem to have thought of that. She and her friends also use Snapchat as a messaging system and frequently send "black photos" which contain a little text but no image.  Not only does this convey a brief message, but it beefs up the number of pictures you’ve sent - which seems to have some kind of gaming value. 
Instagram

She uses this app mainly for following companies, such as clothing manufacturers, whose products she finds interesting. She doesn’t use it that much to post pictures herself. But she does have both a private and a public profile, and posts a few pictures publicly, which have quite a few "likes". Some people, she says, remove pictures which don’t get many likes from their profiles in order to make themselves look more popular.

Twitter

"Twitter’s just boring" and she’s largely stopped using it. Pity: it turns out that my main social media platform is for boring people.

Traditional technologies: SMS and email

While not perhaps strictly a social media technology because it’s not Internet-based, she finds that text messaging remains a highly useful way of communicating with friends. Now that many mobile phone packages come with unlimited texts, and you don’t need to be connected to wifi to use them, you can see why this old, simple technology remains so useful. Meanwhile, though we didn’t discuss it, I’ve noticed that she’s quite comfortable using email where appropriate. She scanned in a document and emailed it to me last night, for instance. But perhaps she was just humouring me and uses email for communicating with greybeards and wrinklies. If I’d had a presence in some of the more hip places on the web, she’d have no doubt sent it to me using a cooler technology…
Niall Sclater . Blog . Aug 19, 2015 01:07am
Many applications for learning analytics have been proposed, are under development or are already being deployed by institutions. These range from prompting staff to intervene with students at risk of drop-out to attempting to understand whether some learning content or activity is effective. Much of this is about providing better information to staff about students and their learning. But if learning analytics is primarily about enhancing students’ learning, shouldn’t we be putting analytics in the hands of the learners themselves? This was the conclusion that participants at the co-design workshop back in the summer came to.

Jisc has recently begun the procurement process to commission a range of basic learning analytics services for UK further and higher education. One of these services is the provision of a student app, taking its data primarily from a learning analytics warehouse which in turn is likely to source data from VLEs (LMSs), student information systems and elsewhere. Jisc is hosting an event in February where we’ll bring together people from universities and colleges across the UK to look at what they think can and should be provided to students directly. The requirements gathering process will find out from students directly what analytics services they would be most interested in having at their fingertips on a smartphone or tablet. Here are some initial thoughts about what might crop up:

Measuring engagement

Students might find it useful to see visualisations of their participation in a course, measured through a variety of metrics such as VLE access, campus attendance and library use. Comparisons with peers may be helpful. And comparisons with previous cohorts, showing the profile of a successful student, might be useful too. These could be presented in a variety of ways, including graphs of engagement over time compared with others. Learners might want to have alerts sent to their device through the app if their participation shows they’re falling below an acceptable level. (A rough sketch of how such a score might be put together appears at the end of this post.)

Measuring assessment performance

There is clearly a need to show details of assessments already completed and grades obtained, and the dates, locations and requirements of impending ones. Assessment events transferred to your calendar with advance alerts could also be useful. But arguably this is simple reporting and alerting functionality and not learning analytics. A progress bar showing how you are progressing through your module and qualification might be helpful. Otherwise assessment data could feed into one of the metrics used for measuring engagement.

Module choice

One application of learning analytics is to assist students in making module choices. Analytics can recommend modules where you are most likely to succeed, comparing your profile with those of previous students and presenting you with information such as "Students with similar profiles to you have tended to perform better when selecting xxx as their next module".

Issues

The above proposed functionality comes with ethical questions, such as: could an app showing you’re falling behind and likely to fail a module be de-motivational and act as a self-fulfilling prophecy? And the module choice example is of course highly dependent on the sophistication of the algorithm, and potentially restricts free choice. I’ve discussed these and many other ethical issues in a recently-published literature review which is the precursor to a Code of Practice for learning analytics which Jisc is co-developing with the sector.
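To make the engagement idea above a little more concrete, here is a minimal sketch of one way an app might combine activity counts into a single score and trigger an alert. The metrics, weights and threshold are all invented for illustration - nothing here reflects how the Jisc service will actually calculate engagement.

def engagement_score(student, cohort_max, weights=None):
    """Combine simple activity counts into a 0-1 engagement score.
    The metrics and weights are illustrative; a real service would be
    driven by validated models rather than hand-picked weights."""
    weights = weights or {"vle_logins": 0.4, "attendance": 0.4, "library_loans": 0.2}
    score = 0.0
    for metric, weight in weights.items():
        maximum = cohort_max.get(metric) or 1   # avoid division by zero
        score += weight * min(student.get(metric, 0) / maximum, 1.0)
    return score

# Hypothetical cohort maxima and one student's counts for the term so far
cohort_max = {"vle_logins": 120, "attendance": 24, "library_loans": 15}
student = {"vle_logins": 30, "attendance": 10, "library_loans": 1}

score = engagement_score(student, cohort_max)
if score < 0.4:  # the alert threshold would be set (and justified) by the institution
    print(f"Engagement {score:.2f} - consider alerting the student via the app")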
Another issue is whether it makes sense from the student’s point of view to separate an analytics app from other student-facing functionality.  Apps containing details of transport, campus maps, your next lecture, computer availability in the library and much else that is digitised on campus are already available to students in many institutions.  Having a separate analytics app might be inconvenient.  On the other hand mobile apps tend to have a limited amount of functionality compared with traditional full-scale PC applications.  An app for monitoring your learning might make sense in its own right. A student-facing app may make learning analytics more tangible and show people the possibilities of using all that data being accumulated to benefit students directly.  I’ve only scratched the surface of what’s possible in the suggestions above.  The event planned for February has already had a large amount of interest from the sector and we’re looking forward to gathering innovative suggestions from staff and students across the UK to be built into the Jisc app.  Stay tuned to this blog for updates on progress.
Niall Sclater . Blog . Aug 19, 2015 01:07am
A wide-ranging discussion of the emerging Code of Practice for Learning Analytics took place in London last week. A new advisory group for the Code includes representatives from the National Union of Students, Edinburgh, Huddersfield, Lancaster and Loughborough Universities, Bucks New University, The Open University, Croydon College and Jisc.

A Code of Practice for Learning Analytics has been identified by Jisc stakeholders in higher and further education as a development priority. The literature review is the first stage in this activity and provides the underlying rationale. The next stage is developing the document itself. The Code will aim to help remove barriers to the adoption of learning analytics. We can ensure that its emphasis is positive, realistic and facilitative. It’ll provide a focus for institutions to deal with the many legal and ethical hurdles which are arising, and can be presented as an evolving, dynamic site rather than a lengthy one-off document which hardly anyone reads, let alone adheres to. Jisc will coordinate the development and roll-out of the Code. Meanwhile advisory group members agreed to critique the Code as it’s being developed and consider piloting it at their own institutions.

Methodology and approaches

Some documents take a particular methodological or philosophical stance. For instance Slade and Prinsloo’s socio-critical approach - where learning analytics is viewed as a "transparent moral practice" and students are seen as co-contributors - has influenced the Open University’s Policy on Ethical Use of Student Data. Should the Code take such an approach? One of the main challenges will be to strike a balance between a paternalistic approach and respecting students’ privacy and autonomy. It was suggested that the various uses for student data might involve different approaches to consent:

Helping individual students based on their own data
Merging individuals’ data with those of others to help the group
Using data to help future cohorts of students

Informed consent could potentially be obtained for each of these options. There was also concern expressed about ensuring that any sharing of student data outside the institution should be carefully controlled. The Code itself should have boundaries and may need to reference other institutional policies. There should be differentiation between demographic and behavioural data, and the "right to be forgotten" needs to be addressed.

A separate document for students?

An approach which puts the needs of learners at the heart of the Code is surely likely to result in a better and more widely-adopted document which helps to allay the fears of students and institutions and facilitate the uptake of learning analytics. The inclusion of the NUS in this group is therefore particularly welcome. There will need to be a balance and a series of compromises struck, however, to develop a usable Code and encourage mutual understanding. The group decided a single document setting out clearly the rights and responsibilities of students, institutions and staff would be preferable to having a separate student "bill of rights for learning analytics". Explaining what the Code means in practice, however, may require separate advice for different stakeholders. At institutions the Code should link closely with the student charter, and involve buy-in from the students’ union.
Striking a balance between high-level principles and detailed guidance

Can the Code be sufficiently high level to meet the needs of all institutions while remaining specific enough to provide genuinely helpful guidance? It was very clear from my institutional visits that the potential uses of learning analytics and the concerns raised varied widely across institutions. The group thought that the document should be fairly high level in order to prove useful to all, but should be backed up by case studies and examples of how institutions have dealt with particular issues. The case studies could be released alongside the Code - for each principle there could be examples of good practice.

Conformance with the Code

Another question I posed to the group was whether we should encourage institutions to adopt the Code wholesale, and therefore be able to claim conformance with it, or to customise it to their own requirements. We probably need to see the end result first, but it was felt that institutions might want to be able to adopt the Code with local modifications.

Human intermediation

Particular concern was expressed that the Code needs to reflect the human context and the need for intermediation of learning analytics by staff. This is a common ethical theme in the literature. However a representative from the Open University said that the sheer scale of that institution makes it unfeasible to use human intermediation for many of the potential uses of learning analytics. Meanwhile there was real concern among members that the language which is used to present analytics to students should be carefully considered, and that data should only be exposed when institutions have mechanisms in place to deal with the effect on students. The potential impact of analytics on the educator also needs to be reflected in the Code.

Format

All of the related codes of practice I’ve looked at are textual documents - normally provided in PDF. The members felt that a document outlining the principles needed to be provided in order to present it to institutional committees, but that an interactive website containing case studies, perhaps in the form of videoed interviews with staff and students, would be welcome. Some codes are extremely lengthy and somewhat uninspiring papers stretching to thirty pages or more. One of the better formats I’ve seen is the Respect Code of Practice for Socio-Economic Research. It’s concise - only four pages - and reasonably visually appealing, therefore arguably more likely to be read and absorbed by busy people than some of the longer codes. However, given the large number of issues identified in our literature review, four pages is unlikely to be sufficient. One approach would be to back up a concise summary document with more detailed online guidance for each of the areas. Discussion forums could be included on each topic, enabling users to raise further issues which arise, and others to provide advice on how they’ve tackled that challenge. This would need some ongoing promotion, facilitation and moderation by Jisc and/or members of the community.

Areas to be included

The literature review covers most of the ethical and legal issues which are likely to be of concern to students and to institutions when deploying learning analytics, though there may be some which I’ve missed or have not yet cropped up in the literature. The section headings and the word clouds in the review could help prioritise the main areas to be included in the Code.
It was pointed out that it would be difficult to deal with all of these meaningfully within four pages, but certainly each area could be expanded on in the supporting documentation.

Including vendors

One member suggested including vendors in the consultation process for the Code. It might help them when making development decisions, for instance encouraging them to build consent systems into their products. The Code could help to ensure that safeguards, such as ensuring privacy, are built in without holding back innovation.

Development process

Jisc will develop the Code up until May 2015 with guidance from the advisory group. Supporting content, e.g. videoed interviews, can be developed subsequently to help raise awareness of the Code, provide examples of how it’s being implemented and keep it current. A sense of ownership by institutions and by students is essential to ensure adoption. How can this best be achieved? A range of stakeholder organisations was proposed and a number of possible events to piggy-back on were suggested. Several members said they’d be keen to try piloting the Code at their institutions too. An experiential learning cycle was suggested, with institutions thinking about:

What’s the ethical/legal issue?
What’s the principle to deal with it?
How did we apply the principle?

Roll-out and dissemination

There is already considerable awareness of the intended Code of Practice, but how should it best be disseminated once developed? One member suggested it would be useful to understand better the processes inside institutions for getting academic policies adopted, as this will be key to uptake. In addition, a couple of events specifically around the Code could be held, papers delivered at relevant conferences and approaches made to newspapers to see if they’d like to cover its launch. It was felt that the Code should be launched with some fanfare at a larger event to increase awareness and potential take-up.

Now on with developing it… Comments are welcome.
Niall Sclater . Blog . Aug 19, 2015 01:06am
In discussions around the ethics and legal issues of learning analytics I’ve found the same issues cropping up again and again. Almost always they’ve already been covered somewhere in the growing collection of publications on learning analytics. Sometimes they’re expressed in different ways but boil down to the same underlying problem. The literature review of these issues aims to provide the background for the development of a code of practice for learning analytics. But it’s a large and unwieldy document to refer to, so I’ve attempted to distil and group the many issues that I’ve come across so far. I’ve given each of the resulting 86 issues a name and have provided a question which attempts to capture the issue. Many of these cannot be answered simply; almost all could be responded to with "It depends…" Most have both an ethical and a legal dimension. Some are related more to logistics than ethics or law. And some are already dealt with by existing institutional policies. I’ll be taking this taxonomy to a workshop next week in Paris with the LACE Project and Apereo, where I hope the issues and questions can be clarified and further refined. Then it’s over to the Code of Practice Advisory Group in the UK for their advice on how to translate this into a useful Code of Practice.

Validity

Minimisation of inaccurate data: How does an institution minimise inaccuracies in the data?
Minimisation of incomplete data: How does an institution minimise incompleteness of the dataset?
Optimum range of data sources: How many and which data sources are necessary for increasing accuracy in the analytics?
Verification of algorithms and metrics: How should an institution verify its algorithms and metrics to ensure accuracy?
Spurious correlations: How can institutions avoid drawing misleading conclusions from spurious correlations?
Evolving nature of students: To what extent can analytics be accurate when students’ identities and actions evolve as they progress through their studies?
Authentication of public data sources: How can institutions ensure that student data taken from public sites is authenticated to their students?

Ownership and control

Control of data for analytics: Who in the institution decides what data is collected and used for analytics?
Breaking silos: How can silos of data ownership in institutions be broken into in order to obtain data for analytics?
Control of analytics processes: Who in the institution decides how analytics are to be created and used?
Overall responsibility: Who is responsible in the institution for the appropriate and effective use of learning analytics?
Ownership of data: What data does the institution own and what is owned by the student?

Awareness

Student awareness of data collection: What should students be told about the data that is being collected about them?
Student awareness of data use: What should students be told about the uses to which their data is being put?
Student awareness of algorithms and metrics: To what extent should students be given details of the algorithms used for learning analytics and the metrics and labels that are created?
Proprietary algorithms and metrics: What should institutions do if vendors decline to make details of their algorithms and metrics public?
Student awareness of potential consequences of opting out: What should students be told about the potential consequences of opting out of data collection and analysis of their learning?
Staff awareness of data collection and use: What should staff be told about the data that is being collected about their students and what is being done with it?

Consent and opting out

When to seek consent: In what situations should students be asked for consent to collection and use of their data for learning analytics?
Consent for anonymous use: Should students be asked for consent for collection of data which will only be used in anonymised formats?
Consent for outsourcing: Do students need to give specific consent if the collection and analysis of data is to be outsourced to third parties?
Clear and meaningful consent processes: How can institutions avoid opaque privacy policies and ensure that students genuinely understand the consent they are asked to give?
Right to opt out: Does a student have the right to opt out of data collection and analysis of their learning activities?
Partial consent: Can students consent to some data collection and analysis but opt out elsewhere?
Right to withdraw: Does a student have the right to withdraw from data collection and analysis after previously having given their consent?
Right to anonymity: Should students be allowed to provide pseudonyms to disguise their identity in certain circumstances?
Adverse impact of opting out on individual: If a student is allowed to opt out of data collection and analysis of their activities could this have a negative effect on their studies?
Adverse impact of opting out on group: If individual students opt out will the dataset be incomplete, thus potentially reducing the accuracy and effectiveness of learning analytics for the group?
Lack of real choice to opt out: Do students really have a choice if pressure is put on them by the institution or there’s a chance of adverse impact on their academic success by opting out?
Student input to analytics process: Should students have a say in what data is collected and how it is used for analytics?
Change of purpose: Should institutions request consent again if the data is to be used for purposes for which consent was not originally given?
Legitimate interest: To what extent can the institution’s "legitimate interests" override privacy controls for individuals?
Unknown future uses of data: How can consent be requested when potential uses of the (big) data are not yet known?
Consent in open courses: Are open courses (MOOCs etc) different when it comes to obtaining consent?
Use of publicly available data: Can institutions use publicly available data (e.g. tweets) without obtaining consent?

Student access

Student access to their data: To what extent should students be able to access the data held about them?
Student access to their analytics: To what extent should students be able to access the analytics performed on their data?
Data formats: In what formats should students be able to access their data?
Metrics and labels: Should students see the metrics and labels attached to them?
Right to correct inaccurate data: What data should individuals be allowed to correct about themselves?
Data portability: What data about themselves can the learner take with them?

Privacy

Out of scope data: Is there any data that should not be used for learning analytics?
Access to employers: Under what circumstances would it be appropriate to give employers access to analytics on students?
Tracking location: Under what circumstances is it appropriate to track the location of students at campuses?
Staff permissions: To what extent should access to individuals’ data be restricted within an institution?
Unintentional creation of sensitive data: How do institutions avoid creating "sensitive" data (e.g. ethnicity, religion) from other data sources?
Use of metadata to identify individuals: Can individuals be identified from metadata even if personal data has been deleted?
Requests from external agencies: What should institutions do when requests for student data are made by external agencies e.g. educational authorities or security agencies?
Sharing data with other institutions: Is it appropriate for institutions to share student data with other institutions in order to increase the dataset and enhance the analytics?
Enhancing trust by retaining data internally: If students are told that their data will be kept within the institution will they develop greater trust in and acceptance of learning analytics?

Action

Institutional obligation to act: What obligation does the institution have to intervene when there is evidence that a student could benefit from additional support?
Student obligation to act: What is the student’s obligation to act on learning analytics designed to help them?
Conflict with study goals: What should a student do if the suggested advice is in conflict with their study goals?
Obligation to prevent continuation: Is there an obligation on the institution to prevent students from continuing on a pathway if analytics show that it is not in the student’s or institution’s interests for them to continue?
Type of intervention: How are the appropriate interventions decided on?
Distribution of interventions: How should interventions resulting from analytics be distributed among different stakeholders in the institution?
Conflicting purposes of interventions: How does the institution ensure that it is not carrying out multiple interventions whose purposes conflict?
Staff incentives for intervention: What incentives are in place for staff to intervene?
Failure to act: What happens if an institution fails to intervene?
Need for human intermediation: Are some analytics better presented to students via e.g. a tutor than via a system?
Triage: How does an institution allocate resources for learning analytics appropriately for learners with different requirements?
Triage transparency: How transparent should an institution be in how it allocates resources to different groups?
Opportunity cost: How is spending on learning analytics justified in relation to other funding requirements?
Favouring one group over another: Could the intervention strategies favour one group of students over another?
Consequences of false information: What should institutions do if it is determined that a student has given false information to e.g. obtain additional support?
Audit trails: Should institutions record audit trails of all predictions and interventions?
Unexpected findings: What infrastructure is in place to deal with something unexpected arising in the data?

Adverse impact

Labelling bias: Does a student profile or labelling bias institutional perceptions and behaviours towards them?
Oversimplification: How can institutions avoid overly simplistic metrics and decision making which ignore personal circumstances?
Undermining of autonomy: Is student autonomy in decision making about their learning undermined by predictive analytics?
Gaming the system: If students know that data is being collected about them will they alter their behaviour to present themselves more positively, thus skewing the analytics and distracting them from their learning?
Abusing the system: If students understand the algorithms behind learning analytics will they abuse the system to obtain additional support?
Adverse behavioural impact: If students are presented with data about their performance, likelihood of failure etc. could this have a negative impact on their behaviour, leading to increased likelihood of failure and dropout?
Reinforcement of discriminatory attitudes and actions: Could analytics reinforce discriminatory attitudes and actions by profiling students based on e.g. their race or gender?
Reinforcement of social power differentials: Could analytics reinforce social power differentials and learners’ status relating to each other?
Infantilisation: Could analytics "infantilise" students by spoon-feeding them with automated suggestions, making the learning process less demanding?
Echo chambers: Could analytics create "echo chambers" where intelligent software reinforces our own attitudes or beliefs?
Non-participation: Will knowledge that they are being monitored lead to non-participation by students?

Stewardship

Data minimisation: Is all the data being held on an individual necessary in order to carry out the analytics?
Data processing location: Is the data being processed in a country permitted by the local data protection laws?
Right to be forgotten: Can all data regarding an individual except that necessary for statutory purposes be deleted?
Unnecessary data retention: How long should data be retained for?
Unhelpful data deletion: If data is deleted does this restrict the institution’s ability to refine its models, track performance over multiple cohorts etc?
Incomplete knowledge of data sources: Can an institution be sure that it knows where all personal data is held?
Inappropriate data sharing: How can we prevent data being shared within or outside the institution with parties who have no legitimate interest in seeing it or may use it inappropriately?
Risk of re-identification: Does anonymisation of data become more difficult as multiple data sources are aggregated, potentially leading to re-identification of an individual later?
Niall Sclater . Blog . Aug 19, 2015 01:06am
European experts came together last week in an icy Paris to review Jisc’s evolving architecture for learning analytics. The event at L’Université Paris Descartes was jointly hosted by Apereo and Jisc. Delegates included representatives from the Universiteit van Amsterdam, the Pädagogische Hochschule Weingarten in Germany, Surfnet in the Netherlands, CETIS and the LACE Project, as well as Jisc and Apereo.

The architecture walkthrough involved participants taking on the following roles for the day:

Oracle (knows everything about the architecture)
Student
Teacher
Tutor
Researcher
Security expert
Software architect
Privacy enhanced technologist
Front end developer
Federative log on expert
Enterprise service bus expert
Data governance expert
Writer
Chair

It turned out to be a very effective way of getting some in-depth constructive criticism of the model, which Jisc is using to procure the components of a basic learning analytics system.

[Photo: Michael Webb takes us through the Jisc learning analytics architecture]

Consent service

The workshop was organised in conjunction with the Apereo Europe conference and an event the following day around ethics and legal issues. It was interesting how almost immediately the architecture session got caught up in issues relating to privacy. These were of such concern to the Germans present that they believed their students wouldn’t be prepared to use a learning analytics system unless the data was gathered anonymously. Once learners had gained confidence with the system they might be persuaded to opt in to receive better feedback. Thus the consent service was confirmed by the group as a critical part of the architecture.

Two use cases were suggested for the consent service: 1. students control which processes use their data, and 2. the institution wants to use data for a new purpose so needs to obtain consent from the students. I find myself wondering about the logistics here: what happens if the student has left the institution? Will you risk having large gaps in the data which diminish the value of the overall dataset?

One participant suggested that students could decide if they wanted analytics to be temporarily switched off - like opening an incognito window in a browser. This would allow them to have a play and do some exploration without anything being recorded. The logistics of building this into multiple systems though would certainly also be complex - and it would potentially invalidate any educational research that was being undertaken with the data.

Students may be worried about privacy, but handling the concerns of teachers was also felt to be crucial. It was suggested that statistics relating to a class should remain private to the teacher of that class; concern was expressed that learning analytics could be used to identify and subsequently fire ineffective teachers. "Could a predictive model allow unintelligent people to make decisions?" was the way the participant with the "teacher" role summed up the perennial battle for control between faculty and central administrators.

One suggestion to minimise privacy infringements was to use the LinkedIn model of notifying you when someone has looked at your profile. Certainly every time someone views a student’s data it could be logged and be subsequently auditable by the student.

[Diagram: Jisc learning analytics architecture v2.0]

Student app

One idea was for the student app to use an open API, allowing other student-facing services to be integrated with it.
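As a thought experiment on the first of those consent service use cases, the snippet below sketches how a consent check might gate any processing of a student’s data. Everything here - the purpose names, the in-memory store, the fail-closed behaviour - is a hypothetical illustration rather than part of the Jisc architecture.

# Hypothetical consent records: the purpose names are illustrative only
consent_store = {
    "student-123": {"retention-modelling": True, "course-recommendation": False},
}

class ConsentMissing(Exception):
    pass

def check_consent(student_id, purpose):
    """Refuse to process data unless the student has opted in to this purpose.
    Unknown purposes fail closed, which would trigger a new consent request."""
    if not consent_store.get(student_id, {}).get(purpose, False):
        raise ConsentMissing(f"No consent from {student_id} for '{purpose}'")

def run_retention_model(student_id):
    check_consent(student_id, "retention-modelling")
    print(f"Running retention model for {student_id}")

run_retention_model("student-123")      # permitted: the student has opted in
# run_retention_model("student-456")    # would raise ConsentMissing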
Another issue raised was that most analytics is carried out on data sources which can be fairly "old"; however, there may be a need for real-time learning analytics. And a student app which assessed whether learning outcomes had been achieved could also be very useful. One of the most interesting ideas mooted was this: could the most important source for predictive analytics be "self-declared" data? It might be that some wearable technology monitoring your sleep patterns or your exercise levels, for example, could be mapped onto your learning performance. Or you might want to log the fact that you’d watched several relevant YouTube videos that you’d discovered.

Learning record store

Concern was expressed around the performance of dashboards when required to process big data. Thus the ETL (extract, transform and load) layer is crucial to determine what data is stored in the learning records warehouse.

Alert and intervention system

This should not only be in place to help those at risk but should also allow the teacher to analyse how well things are going overall in the class. Interventions might be to congratulate students on their progress as well as to address potential failure or drop-out.

Learning analytics processor

This was deemed to be so critical that it should form a separate layer, underpinning the other applications. Meanwhile compliance with the Predictive Model Markup Language (PMML) has already been specified by Jisc as a requirement for the predictive models to be used by the learning analytics processor. But one member advised us to be wary of "gold-plated pigs" - some vendors are great at presenting beautiful apps and dashboards which may have shaky underlying models and algorithms doing the predictions. Most staff are unlikely to want to know the fine detail of how the predictions are made, but they will want to be reassured that the models have been checked and verified by experts.

Standards

The use of technical, preferably open, standards is going to be important for an architecture comprising a number of interchangeable components potentially built by different vendors. The Experience API (Tin Can) has been selected as the primary format for learning records at the moment (a minimal illustrative statement appears at the end of this post); it should be relatively easy to convert data from the LMS/VLE to this format, and some plugins e.g. Leo for Moodle already exist. However there may be a maintenance overhead every time each LMS is upgraded. It was suggested that the IMS Learning Information Services (LIS) specification would be appropriate for storing data such as groupings of individuals in relation to courses.  There is already reportedly a LIS conversion service for Moodle. The problem of ensuring universally unique identifiers for individuals (and activities) was also noted.

Security

Our "cracker" (security expert) was concerned that security issues will be complex because of the number of different systems in place.  Meanwhile there’ll be ongoing requirements for patches and version updates for the different LMSs and student information systems.

Have people given up on tablets?

Conclusions

We were reassured that the architecture, with some minor changes suggested by the group, is robust, and we’ll be aiming to put in place a basic "freemium" learning analytics solution for UK universities and colleges by September. We hope also that this will contribute to international efforts to develop an open learning analytics architecture and we look forward to working with other organisations to develop it further in the future.
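To illustrate the learning record format mentioned under Standards above, here is a minimal sketch of an xAPI (Tin Can) statement for a single VLE event. The names, email address and URLs are invented for illustration; only the overall actor / verb / object shape follows the xAPI specification.

```python
import json

# A minimal, hypothetical xAPI statement: a student viewing a page in the VLE.
# The identifiers below are made up; only the statement structure follows xAPI.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Student",
        "mbox": "mailto:student@example.ac.uk",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://vle.example.ac.uk/course/123/page/456",
        "definition": {"name": {"en-US": "Week 3 lecture notes"}},
    },
    "timestamp": "2015-03-27T10:15:00Z",
}

print(json.dumps(statement, indent=2))
```

An ETL layer of the kind described under "Learning record store" above would be responsible for converting native VLE/LMS events into statements like this before they are loaded into the learning records warehouse.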
I’m grateful to all the participants who made such useful contributions on the day and in particular to Alan Berg of Apereo and the University of Amsterdam for initiating this workshop and doing the lion’s share of the organisation.
Niall Sclater . Blog . Aug 19, 2015 01:06am
Jisc’s Learning Analytics Network got off to a great start last week with a workshop hosted by the University of East London.  The event was fully subscribed, with around 50 people attending from 31 different institutions, showing the high level of interest in the area in the UK.

Staff and students very positive about dashboards at Nottingham Trent

Mike Day, Director of Information Systems at Nottingham Trent University, gave the first presentation. NTU is particularly advanced in its use of learning analytics.  Mike discussed how his university already has good retention levels but wanted to use data to better inform interventions, improving attainment and a sense of belonging for students.  A dashboard was built using HP Autonomy and is now in use across the institution, combining biographical information with data sources such as door swipes, library loans and VLE use.  This enables comparison of engagement across a cohort and raises alerts if students appear to be disengaged (an illustrative sketch of this kind of combination appears at the end of this section).

Mike Day discusses Nottingham Trent’s pioneering work in learning analytics

Students are "strongly positive" about the dashboard, with 93% of them wanting to be warned if they’re at risk of failure.  Staff are also very positive.  The dashboard confirms that engagement is the strongest predictor of progression, and shows that groups with historically poorer progression and attainment do have different levels of engagement. For these groups engagement is a stronger predictor than demographics.

Analytics to improve retention at Huddersfield

Next up was Sarah Broxton, Strategic Planning Officer at Huddersfield University, who presented on Huddersfield’s work to improve retention with the use of data.  Despite Huddersfield’s improving NSS scores and league table positions there has been a strategic requirement to improve retention rates and institutional effectiveness and efficiency.  Meanwhile attendance monitoring and a centralised timetable system have been introduced, and there’s a need to inform staff better about the data available to them. By mapping leaver characteristics such as age and entry qualifications to current cohorts, together with attendance data, reports of the students most likely to leave early were produced and communicated to personal tutors and other staff, who were encouraged to get in touch.  As with other large IT projects, Huddersfield found the technical issues relatively easy to solve - it’s changing human practices and processes that creates the challenge.  However through increased transparency and training for colleagues, acceptance of learning analytics is increasing.

Trying a "skunk-works" approach

Roger Greenhalgh, Jisc’s Strategic IT and Organisational Leadership Advocate, then talked about a "skunk-works" approach to analytics.  Roger showed how small innovation units created within organisations, but relatively free from procedures and regulation, can develop analytics more quickly than traditional IT departments.

Engagement analytics at the University of East London

Gurdish Sandhu and Simon Gill from the University of East London discussed their Information Strategy and Student Engagement Analytics, combining attendance monitoring data with VLE usage data and other sources to indicate students at risk of dropping out.  The University uses QlikView and is at the forefront of deploying useful dashboards for a range of learning analytics applications.
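As a purely illustrative sketch (the data sources, weights and threshold below are my own assumptions, not a description of the NTU, Huddersfield or UEL systems), combining counts such as door swipes, library loans and VLE logins into a comparable engagement figure, and flagging students who fall well below the cohort average, might look something like this:

```python
# Hedged sketch only: the weights and threshold are invented for illustration.
from statistics import mean

def engagement_score(door_swipes: int, library_loans: int, vle_logins: int) -> float:
    """Collapse several activity counts into one crude engagement figure."""
    return 1.0 * door_swipes + 2.0 * library_loans + 1.5 * vle_logins

def flag_disengaged(cohort: dict[str, float], threshold_ratio: float = 0.5) -> list[str]:
    """Flag students whose score falls well below the cohort average."""
    average = mean(cohort.values())
    return [student for student, score in cohort.items()
            if score < threshold_ratio * average]

cohort = {
    "student-a": engagement_score(12, 3, 40),
    "student-b": engagement_score(2, 0, 5),
    "student-c": engagement_score(9, 1, 25),
}
print(flag_disengaged(cohort))  # ['student-b'] with these invented numbers
```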
Jisc’s work in the area

After lunch I presented with Paul Bailey and Michael Webb on Jisc’s activities in the area, discussing the architecture for a basic learning analytics system which is currently being procured, some of the ethical and legal issues for a code of practice for learning analytics, and plans for a student app. Paul also announced that Jisc will provide funding to the Network for three small learning analytics projects of £5k each to be run from June to September, reporting back at the end of the year.  Network members will decide which proposals should be supported.

Group work

The final session of the day involved splitting into groups to discuss some of the issues of most concern to institutions, notably:

Interventions: we need to be open and transparent about these - and it should be clear when they will happen.  Accurate interpretation of data is essential.  The student needs proper support in order to take any suggested actions.  Meanwhile, messages to students need to be managed carefully so they don’t have a negative effect.  The intervention should be captured and measured so that institutions can find out what works.

Institutional adoption: in order to develop and roll out analytics at institutions the following need to be considered:
Identify the stakeholders
Fit any analytics project work into the institution’s business planning cycle - use something like a "quality of learning and teaching forum"
Ensure that senior management sponsorship is secured and that learning analytics is prioritised against competing projects
Put mechanisms in place to interpret the analytics and define interventions
Convince academics and tutors that there’s something in it for them
Identify genuinely valid analytics - not just things we’d like to do
Identify the risks of learning analytics

A more practical suggestion was for Jisc to develop a checklist for organisations on how to implement learning analytics - including the "elevator pitch" to senior management.

There was a tangible enthusiasm among those present about the potential of learning analytics to improve the student experience, and we’ll be planning further events soon.  To stay informed about future events you can subscribe to the analytics@jiscmail list.
Niall Sclater . Blog . Aug 19, 2015 01:05am
What data and analytics should be presented directly to students? This was the subject of a workshop on 27th Feb in Jisc’s London offices, attended by staff from across UK higher education.  We also had with us a couple of students with a keen interest in the area. In advance of the meeting, members of Jisc’s Student App Expert Group had anonymously provided a range of potential requirements, which I then grouped and used as the basis for the workshop (they’re included at the bottom of this post for information).

The first area is around information provision to students, and comprises functionality for:
Monitoring your academic progress
Comparing your progress to others or to your prior performance
Monitoring your engagement with learning
Providing useful information such as exam times and whether there are free computers available

The second area is concerned with action - the student actively entering information or doing something to enhance their learning. It consists of:
Prompts and suggestions
Communication facilities with staff and students
The upload of personal data
Providing consent to what data is used for learning analytics

Various other issues were suggested relating to the interface (e.g. ensuring it is easy to use), ethics (e.g. being aware that the app can only ever give a partial view of student progress), and data (e.g. accepting data from a wide range of sources).

During the day, groups discussed a number of these areas of functionality. For each we defined an idea, a purpose, benefits, drawbacks & risks, and presentational aspects. Some of these ideas are fairly wacky and might not survive further interrogation or prioritisation, but here they all are for the record. The next stage is to run the ideas past students themselves to find out what they want to see in an analytics app.

How engaged am I?

The most common application of learning analytics is measuring student engagement. Putting this information in the hands of the learners themselves could help to reassure those who feel they’re on track and prompt those who aren’t engaging. There’s always the risk of course that students will game the system to achieve better engagement ratings without actually improving their learning. However it could also result in them finding the library, attending more lectures, using the VLE more or reading more books. An idea for presenting this information was to show overall engagement on a scale of 1 to 5. Clicking the indicator would result in a further breakdown for e.g. library usage, lecture attendance and VLE usage. VLE usage might be further broken down if required, showing forum participation perhaps if that was considered important. Data could be shown by module as well as across modules.

Compare my engagement

Learners’ engagement could be compared with that of their peers or even their own past performance. Again this could be potentially motivating and inspire students to change their behaviour. The risks include being demotivating, falsely equating engagement with success, and privacy issues e.g. the identification of individuals in small cohorts from anonymous data.

How am I progressing?

The aim here is to gather and surface academic progress indicators and to identify actionable insights for the student. Timely information would aim to change their behaviour and improve achievement. Having all the information in one place would be beneficial, but would there be enough information to enable students to take action?
One risk is that this could "kill passion" for the subject and further divert effort into assessed or measured activities. Providing context would also be important - a grade without feedback may not be helpful. It could also be counterproductive for high performing students. Meanwhile raised and unfulfilled expectations could result in worse feedback for institutions on the National Student Survey. Data could be presented on a sliding scale, showing whether students were likely to pass or fail and allowing them to drill down into more granular detail on their academic performance.

Compare my academic progress

This functionality would allow students to compare where they were in key activities with previous cohorts and with peers. It could aid those who lack confidence and help them to realise that they are doing better than they realised. Of course it could also damage their confidence. Another risk is that the previous cohort might be different from the current intake or the way the course is being taught might have changed.

My assessments

A possibility would be to show analytics on what successful students do and how your actions compare e.g. if students submit assessments just before the deadline, are they more likely to fail? This might result in students being better prepared for their assessments.

My career aspirations

The aim here would be to help understand whether the student is on track to achieve their chosen career based on records of previous learners. This might include networking opportunities with students who have already followed a similar path. It might help to increase engagement and assist with module pathway planning. Students could talk about their skills and better understand how to quantify them. Meanwhile suggestions such as "you need to know about teamwork" or "identify opportunities for voluntary work" could be provided. The app might also suggest alternative career paths or that a student is on the wrong one e.g. "your maths does not appear to be at the level required for a nuclear physicist". Risks include that the app could be overly deterministic, restricting learner autonomy - and that students would need to ensure that their data was up to date.

Plan my career path

A related possibility is showing what educational journey a student needs to take to achieve their intended career, helping them to avoid the wrong choices for them e.g. what does the life of a midwife look like and what was their educational journey to get there?

My competencies

Another idea discussed was to enable students to monitor their competencies and reflect on their skills development, perhaps through some sort of game. This could encourage them to engage better with the materials and with their cohort. Again this wouldn’t of course guarantee success.

My emotional state

Enabling students to give an idea of their emotional state in some way would allow them to gauge how they were compared to their peer group, and to provide better feedback to the institution or to tutors.  This is highly personal information of course and you might want it to be visible to you only, unless it is anonymised.

Why I didn’t attend

The app could allow students to input their reasons for non-attendance e.g. "I didn’t attend this lecture because I had my tonsils out" and "but while recovering in hospital I watched the lecture video and read the lecture notes". This might enable the adaptation of engagement scores so that students felt they reflected the real situation.
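A hedged sketch of the "Why I didn’t attend" idea above (the rule and field names are my own assumptions, not an agreed design): a self-declared, excused absence where the student has caught up on the material could be treated as attendance when the engagement score is calculated.

```python
# Illustrative only: the adjustment rule below is an assumption, not an agreed design.
def adjusted_attendance(attended: int, scheduled: int, excused_and_caught_up: int) -> float:
    """Return an attendance ratio that treats excused, caught-up sessions as attended."""
    if scheduled == 0:
        return 0.0
    counted = attended + excused_and_caught_up
    return min(counted, scheduled) / scheduled

# A student who missed two lectures for surgery but watched the recordings:
print(adjusted_attendance(attended=8, scheduled=10, excused_and_caught_up=2))  # 1.0
```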
Communicate

We looked at whether the app should include communications facilities around the analytics. This might be between students and tutors or perhaps with peer mentors. There was concern that this might be mission creep for the app; however, integrating communications around the interventions taken on the basis of the analytics might be useful. The app could also provide information about opportunities for communications around student support, with personal tutors, study buddies, peer mentors or clubs. There would be potential for communications based on the programme rather than just the module, and the functionality might for instance be used to facilitate the take-up of personal tuition. The tools available might depend on the level of the students e.g. encouraging those on a one-year taught Masters. One issue raised was that there would be student expectations of a quick response, and this might result in even more email "tyranny" for academics.

Link app to my social media accounts

The idea here is to enable students to link the app to Twitter, LinkedIn or other social media accounts so that they can send status updates from the student app. This would enable the aggregation, for example, of Twitter feeds from all those on the module with Twitter accounts, allowing learners to connect better with others. The institution could use the data for sentiment mining and updates could be fed to the lecturer, even while they’re giving the lecture.

Give my consent for learning analytics

In order to ensure the legal and ethical collection and use of student data for learning analytics, a key part of the learning analytics architecture Jisc is commissioning will be a consent system, which is likely to be controlled from the student app. This could be particularly important in some of the more personal applications such as linking to your social media accounts or inputting your emotional state. It will also help users to understand what is being done with their data, feel a sense of control over the process, and help to reduce concerns that data could be misused. It would allow students to control any third party access to their data e.g. by future employers.

My location

Providing geolocation data to the app could have a number of applications such as helping vulnerable students to feel safer, campus mapping and self-monitoring. It could help institutions by enabling the tracking of the use of services. Students might also be prompted to attend campus more or spend more time in the library. This does of course have privacy implications and access to location data would need to be strictly controlled (by the student). It would also generate large quantities of data.

Fun analytics

The aim here would be to motivate and engage, and to get students to use the app, by providing fun or amusing analytics. Options discussed included "calorie burner info" e.g. "you read 2 articles today and used 5 calories"; a campus induction game; weekly challenges based on activity and studies; and a badge system of rewards.

Where next?

A recommendations engine could be presented through the app, providing relevant offers, signposting and information to students. Again this could potentially result in increased engagement, driving students to helpful services. On the downside it could be intrusive, add to information overload, and be used for marketing rather than benefitting learning. Information could be presented on what’s trending, forthcoming local events, and silly facts e.g.
"30% of students who eat here get a 1st class honours!" This could help students to be better informed and prompt them to do something they might not have before. My students union Increased engagement with the students union can help learners to feel better connected so the app could also be used to facilitate this by showing events and information - and potentially engage them more in the democratic process. Car park We parked a number of ideas during the day to return to perhaps at a later stage, including: assessment regulations, tutor performance, data literacy, the naming of the app, and how we get disengaged students to use it. Suggested functionality for the app The following possible functions were suggested by members of the Student App Expert Group in advance of the session and then expanded on in the discussions, as summarised above. This provides a good checklist of what we might wish to consider including: Information Monitoring academic progress Progress. What percentage of the course materials, activities, formative assessments etc. have you done? Student should be able to see their progress with clear indication whether they are at risk or not Show students their academic progress, at a granular level: what marks they have for each assignment and how that contributes to their overall progress Ability to track own academic progress - get marks, compare own marks across modules and years Monitor student progress (provide overall picture of student performance and alert to potential problems) Could there be an area showing their student performance? Real-time, or near real-time updates on progress At a glance views of progress against criteria (such as assessment), links to personal progression tracking, and ‘useful’ traffic-light style Overview of essay marks, including marks for research skills, writing skills, originality etc. -&gt; ability to compare to previous essay marks Access to formative and summative marks, and feedback Performance data: grades Performance data against entry profile and previous performance An integrated view of a student’s study career, from the programme level to the course/module level What does the rating mean? Comparing academic progress Academic "performance" in relation to others on cohort, possibly to previous cohorts and grade predication Crucially, should be able to compare their data both to themselves over weeks/months/years of study, but also to the ‘average’ behaviour of the cohort with whom they study. Answering the question: "Am I in line with my cohort, both now and preferably historically too?" Comparison. How is your progress compared to others - in the class, best in class, last year’s class etc? Leaderboards? Actually I hate them but my research shows that for some classes of student they do encourage engagement. Benchmarking the student academic performance with peers Ability to compare essay marks to average marks of cohort Where would 1st class degree attainment be on the line - and 2nd class, 3rd class and so on? 
Monitoring engagement
Look at interactions/activity they have taken part in on the VLE and/or other systems, number of journals accessed online/in the library
Activity data on attendance, VLE usage and library… and, if there are appropriate comparators, then those too

Useful information
A ‘calendar plus’ function that tells you not just what your lectures are for the day, but what other activities there are around campus - sports classes, clubs, if certain lecturers have office hours, if there are free computers in the computer lab, etc. Needs to both respond to where you are on campus, as well as make suggestions based on how much time you have to spare as well as where you are at the moment. For example, ‘You have an hour until your next lecture - why not boost your ‘library score’ and visit there for a little while, or go talk to Professor Blogs since she has office hours’.
Have information on the university’s important events and useful revision techniques
Easy and better access to learning resources

Action

Prompts and suggestions
Student should know what to do next
Provide a visual representation of the chosen metric at a granular enough level that student activity can clearly precipitate change
Potential outcomes and necessary effort - predict a 2:2: do this well here and here and get a 2:2
Recommendations of training courses and resources based on essay marks
Prompts to regular self-assessment of research skills, writing skills, presentation skills etc. -> allow students to take responsibility for learning/progress
Provide students with a ‘to do’ list, showing what they have to do and by when. The difficulty here is making it all-inclusive
Right now, immediately after reading this text, what do you expect students to actually do?
Give students access to people who can help them and identify the specific kinds of help that can be provided
Tips to improve performance, what to do next
Gives information on how to improve, not just/only status
Diagnostics. The system should be able to see where I’m not doing well and point me to support materials. E.g. you don’t seem to be doing well at this bit of the syllabus - or you don’t seem to be doing well at more analytic questions…
A recommendations aspect based on past use (and how others behave) - based on this module/this paper/this time of studying, we recommend that you consider this topic/this other article/this prime study time
Have information on ways they could improve their student engagement

Communication
A ‘question’ function to send concerns to the academic personal tutor or other intermediary
Identify effective communication strategies
Facilitating interactive and better communications with academic and admin staff
Ability to communicate between staff-student, student-student

Upload of personal data
Ability to load personally captured data to provide context and information
Allow students to set their own notifications - which may be alerts, reminders, or motivating messages - triggered by circumstances of their choosing. (Making good decisions about this would need facilitation, but would help towards metacognitive awareness and understanding of the data and the software themselves.)
Consent
A way of opting in or out of sharing the data, or aspects of the data, with staff
Granular control of who sees what - controlled by the student (a minimal sketch of how such consent decisions might be recorded follows at the end of this checklist)

Other issues

Interface
The student app should be easy to use
Easy access to visual information
Provide a visual representation of the chosen metric at a granular enough level that student activity can clearly precipitate change
Whatever the outcome is for the learning analytics app, I’d try and keep the core interface simple. I’d personally prefer one graphic ultimately, but I’m sure there are arguments for a range of options
Cross platform - brandable (the students may know their institutional brand but perhaps won’t respond to something plastered in Jisc branding)
Access to the underlying data, but also good conceptually-straightforward visualisation of that data
Analytics visualisations that will prove compelling for students to visit the app
Cross platform/device so all can access

Ethics
Transparency about the gaps - the app should avoid over-determining - or giving the impression of over-determining - students’ progress and achievement based on data, which is inevitably an incomplete representation of learning but which may carry more weight than the ineffable or unrecorded moments of learning.

Data sources
Accept data from a range of simple or aggregated end-points - I appreciate it is likely to accept a feed from the Jisc basic LA tool, but it would be useful if we could provide a feed from the basic data we have in Blackboard ASR

Impact on teaching
Identify effective teaching and assessment practices
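As referenced in the Consent items above, here is a minimal sketch of how such opt-in/opt-out decisions might be recorded and queried. All field names, categories and audiences are my own assumptions for illustration, not part of the Jisc student app or consent service specification.

```python
# Hedged sketch only: fields and granularity are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentDecision:
    student_id: str
    data_category: str   # e.g. "attendance", "vle-activity", "self-declared"
    audience: str        # e.g. "personal-tutor", "module-staff", "third-party"
    granted: bool
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ConsentRegister:
    """Answers 'may this audience see this category of the student's data?'"""
    def __init__(self) -> None:
        self.decisions: list[ConsentDecision] = []

    def record(self, decision: ConsentDecision) -> None:
        self.decisions.append(decision)

    def may_share(self, student_id: str, data_category: str, audience: str) -> bool:
        # The most recent decision wins; with no decision, nothing is shared.
        for d in reversed(self.decisions):
            if (d.student_id, d.data_category, d.audience) == (student_id, data_category, audience):
                return d.granted
        return False

register = ConsentRegister()
register.record(ConsentDecision("s-001", "vle-activity", "personal-tutor", granted=True))
print(register.may_share("s-001", "vle-activity", "personal-tutor"))  # True
print(register.may_share("s-001", "attendance", "third-party"))       # False
```

Asking for consent for a new purpose would then simply mean introducing a new category or audience for which no decision yet exists, so nothing is shared until the student responds.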
Niall Sclater . Blog . Aug 19, 2015 01:04am
Jisc is currently procuring the different elements of its architecture for a basic learning analytics system which we plan to make available to UK colleges and universities later this year.  In this video I explain how it all fits together.

The service will consist of the following components (a rough sketch of how they might hand data to one another appears at the end of this post), and institutions will be able to opt in to use some or all of the components as required:

A learning analytics processor - a tool to provide predictions on student success and other analytics on learning data to feed into student intervention systems.
A staff dashboard - a presentation layer to be used by staff in institutions to view learning analytics on students. Initially this presentation layer will be focussed on the learner, but dashboards for managers, librarians and IT managers could also be developed.
An alert and intervention system - a tool to provide alerts to staff and students and to allow them to manage intervention activity. The system will also be able to provide data such as methods and success, to be fed into an exemplar "cookbook" on learning analytics.
A student app - based on requirements gathering with staff and students.  Integration with existing institutional apps will be supported.
A learning records warehouse - a data warehouse to hold learning records gathered from a range of institutional data sources. We will define an output interface and also support integration with a common set of institutional systems.

When will it be available?

A procurement process is underway: proposals from suppliers have now been received and we are at the selection stage to appoint suppliers to develop each of the components of the learning analytics solution. The agreements will be in place in early May. The expectation is that a basic learning analytics system consisting of the processor, dashboard and warehouse will be in place to pilot with universities and colleges from September 2015. The other components will be developed over the next 6-12 months.  A full production service will be provided from September 2017 if the pilots prove successful and popular.
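As referenced above, here is a rough, purely illustrative sketch of how the components might pass data to one another. Every function name below is a stand-in of my own; the actual interfaces between the warehouse, processor and intervention system are still being defined through the procurement.

```python
# Hypothetical end-to-end flow between the architecture's components.
# All functions are placeholders; the real interfaces are still being specified.

def fetch_learning_records(student_id: str) -> list[dict]:
    """Learning records warehouse: return the stored activity records for a student."""
    return [{"verb": "experienced", "object": "week-3-notes"}]  # placeholder data

def predict_risk(records: list[dict]) -> float:
    """Learning analytics processor: turn records into a risk-of-withdrawal estimate."""
    return 0.2 if len(records) > 5 else 0.8  # crude placeholder model

def raise_alert(student_id: str, risk: float, threshold: float = 0.7) -> None:
    """Alert and intervention system: notify staff and log the intervention if needed."""
    if risk >= threshold:
        print(f"Alert: {student_id} predicted at risk ({risk:.0%}); plan an intervention.")

student = "student-123"
risk = predict_risk(fetch_learning_records(student))
raise_alert(student, risk)  # the staff dashboard and student app would render the same data
```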
Niall Sclater . Blog . Aug 19, 2015 01:04am
Jisc’s draft Code of Practice for Learning Analytics is now available for public consultation for the next two weeks: Code of Practice for Learning Analytics v04 [MS Word].  I’ve already had some very useful comments back from a number of invited organisations and individuals which will help to enhance the document and the accompanying online guidance. The background to the Code is provided in an earlier blog post.  The deadline for comments from the public consultation is 5th May.  These will then be presented to the Code of Practice Advisory Group, which will agree a final version. I’d be very grateful for any further thoughts from individuals or organisations, either by commenting on this blog post or by email to me at niall.sclater {/at/} jisc.ac.uk. We intend to release the final version of the Code in June and will be developing accompanying online guidance and case studies over the coming months.
Niall Sclater . Blog . Aug 19, 2015 01:04am
In February we ran a workshop in London with university staff and a couple of students to gather requirements for a student app.  I’m now carrying out sessions directly with students to find out what they would find most useful.  Yesterday I had the pleasure of visiting the University of Lincoln at the invitation of Marcus Elliott. The students were from a variety of levels and backgrounds, ranging from engineering to drama.

MAB Main Admin Building (Credit - University of Lincoln)

Most of them had little idea of what learning analytics was about, so I introduced the session by describing a few things that were being done in the area - attempting not to influence their thinking too much. Marcus and I had agreed that we were better starting with a blank slate and then looking at whether there was any common ground with the conclusions of the London workshop. As with the previous event it was a challenge to keep the group focussed on the applications of learning analytics without straying into all the useful things that apps could do for students.  I felt it was better though just to let the ideas flow, and not to impede the creativity in the room.

The students came up with ideas for functionality, put them on stickies, and discussed them with a partner.  Then they all came together and spontaneously grouped the ideas into four categories: academic, process of learning, social integration and system monitoring / institutional data. At this stage we didn’t want to look too much at presentational issues; however we provided the students with blank smartphone screen templates to scribble on in order to focus them on what the functionality might involve.

Inevitably there was a focus on assessment and, as with the London workshop, up-to-date data on grades was thought to be one of the most useful things a student app could provide. Is this learning analytics?  I don’t know - but ideas such as showing your ranking in the class and being able to manage processes from this screen, such as clicking to arrange a tutorial, would certainly be useful. Other ideas included calendar reminders of assessment due dates and exams. The app could provide a one-stop shop for all of a student’s results. It could also show what percentage of assessments the student has completed, and what grades they need to obtain in future assignments in order to receive different levels of degree award (a simple illustrative calculation of this appears at the end of this post).

Better feedback to students from their lecturers was also thought to be something the app could facilitate. This student neatly links personalised feedback to more detailed suggestions on how to improve particular skills, e.g. academic writing skills, and options for self-development such as links to help sessions which could be placed directly into the student’s calendar. Giving real-time feedback to lecturers, rather than waiting till it’s too late via student surveys, was another option. This could help speed up improvements to courses.

Providing reading list functionality was also popular with the attendees. Here students are presented with metrics showing how much they’re engaging with the reading list on each of their modules. Reading list functionality could also include reviews, comments and recommendations from other students (perhaps building on the features of Goodreads). They also suggested Amazon-style recommendations for reading e.g. "if you liked x you may like y".

How you spend your time was another application which the students thought could be useful.
This example shows the percentage of time spent by the student on various activities. The data itself could be assembled from timetables, calendars, geo-location and self-declared activity. Recommendations on how much time should be spent on different activities could be another helpful feature.

Managing event attendance was another popular option. The student could be contacted about societies and social events, workshops, guest lectures etc - all of which would be based on their interests, which they could also specify via the app. This would cut down on the amount of "spam" messages from the University, which they say have led to many students not bothering to read their emails. You could invite people to events you are organising, or push events to their app - again based on the interests they have specified. Rating events would also be a useful feature. If analytics determined that a student was becoming disconnected, the app could introduce them to opportunities such as open day volunteering. There was a suggestion that University and Student Union data could be combined to suggest such opportunities based on career aspirations and interests.

Another option is to use the app to check in to lectures, perhaps automatically using geo-location, and to enter reasons (such as illness) for non-attendance.  There could also be notifications of lecture cancellations. The app could add the events you attend to a portfolio of attended lectures.

Finding other students with similar or complementary interests was a popular suggestion too. This idea came from a postgraduate student who recognised the value of interdisciplinary contact, so that you could look for someone to help you in an area you were less familiar with. You could specify what skills you have on offer and what you’re looking for assistance with.

Though we didn’t ask her to do it, showing how the different functions of the app could be accessed was important to this student in order to understand how everything would fit together. Another generic suggestion was that the app should keep you logged in all the time.

So all in all some great suggestions from this group of students in Lincoln.  Some of them aren’t what are normally considered learning analytics applications, but they all rely on data - some of it new data, such as students being able to specify their interests in more detail in order to receive targeted materials and details of events. There’s a lot of complementarity with what staff thought of in the London workshop.  It’ll be interesting to see now what students at other institutions come up with.
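One of the ideas above - showing what grades a student still needs on remaining assignments to reach each level of degree award - is at heart a small calculation. Here is a hedged sketch assuming a simple weighted-average model; real degree algorithms vary considerably between institutions and this is not any institution's actual rule.

```python
# Illustrative only: a simplistic weighted-average model, not an institutional algorithm.
def required_average(current_marks: list[tuple[float, float]],
                     remaining_weight: float,
                     target: float) -> float:
    """Average mark needed on remaining work to finish at `target` overall.

    current_marks: pairs of (mark out of 100, weight as a fraction of the whole course)
    remaining_weight: total weight of assessments still to be taken
    """
    earned = sum(mark * weight for mark, weight in current_marks)
    return (target - earned) / remaining_weight if remaining_weight else 0.0

# A student with 62% on work worth 40% of the course, aiming for a first (70%):
print(round(required_average([(62, 0.4)], remaining_weight=0.6, target=70), 1))  # 75.3
```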
Niall Sclater . Blog . Aug 19, 2015 01:03am
On a fabulous spring day last Friday around 44 people made it to Edinburgh for the second meeting of the Learning Analytics Network, jointly organised by Edinburgh University and Jisc.   I suspect a number of us may have been somewhat bleary-eyed after witnessing the political landscape of the UK being redrawn during the previous night.  However the quality of the presentations and discussion throughout the day seemed to keep everyone awake.  It was particularly interesting to hear about the various innovations at Edinburgh itself, which is emerging as one of the pioneering institutions in learning analytics.

Edinburgh’s visionary approach is highlighted by the recent appointment of Dragan Gašević as Chair in Learning Analytics and Informatics, and it was great to have Dragan give the first presentation of the day: Doing learning analytics in higher education: Critical issues for adoption and implementation (PDF - 1.21MB). Dragan outlined why learning analytics is increasingly necessary in education and examined some of the landmark projects so far, such as Signals at Purdue and the University of Michigan’s development of predictive models for student success in science courses. In an Australian study Dragan was involved with, he found there is a lack of a data-informed decision making culture in universities and that, while researchers are carrying out lots of experimentation, they are not focussed on scaling up their findings. Finally Dragan looked at ethics and mentioned the Open University’s policy and Jisc’s (soon to be released) Code of Practice for Learning Analytics.

Next up was Sheila MacNeill on Learning Analytics Implementation Issues (presentation on Slideshare). Sheila gained expertise in learning analytics while working for Cetis and has now been attempting to put this into practice at Glasgow Caledonian University. On arriving at the institution 18 months ago she found it was difficult to get to grips with all the systems and data of potential use for learning analytics. She started by identifying the areas: assessment and feedback, e-portfolios, collaboration and content.  This data is hard to get at and needs a lot of cleaning before it can be used for learning analytics. Sheila’s summary slide outlines the main issues she’s encountered:
Leadership and understanding is crucial - you need both a carrot and stick approach.
Data is obviously important - ownership issues can be particularly problematic.
Practice can be a challenge - cultural issues of sharing and talking are important.
Specialist staff time matters - learning analytics has to be prioritised for progress to be made.
Institutional amnesia can be an issue - people forget what’s been done before and why.

Zipping back to the East Coast, Wilma Alexander talked about Student Data and Analytics Work at the University of Edinburgh (PDF - 866kB).  She discussed attempts to use Blackboard Learn and Moodle plug-ins for learning analytics, finding that neither of them was designed to provide data to students themselves. They then collected 92 user stories from 18 staff and 32 students. Much of what people wanted was actually already available if they knew where to look for it. Students wanted to understand how they compare with others, to track their progress, and to view timetables, submissions and assessment criteria.

The next presenter, also from Edinburgh, was Avril Dewar: Using learning analytics to identify ‘at-risk’ students within eight weeks of starting university: problems and opportunities (PPT - 351kB).
Avril discussed her work at the Centre for Medical Education to develop an early warning system to identify disengaged students. 80% of at-risk students were identified by the system. Metrics included: engagement with routine tasks, completion of formative assessment, tutorial attendance, attendance at voluntary events, and use of the VLE.

Yet another Edinburgh resident, though this one working for Cetis rather than the University, was next.  Wilbert Kraan presented on The Feedback Hub - where qualitative learning support meets learning analytics (PPT - 1.86MB). The Feedback Hub is part of Jisc’s Electronic Management of Assessment project, working with UCISA and the Heads of eLearning Forum. It aims to provide feedback beyond the current module, looking across modules and years.  Wilbert proposed that feedback-related data could be a very useful input to learning analytics.

My colleagues Paul Bailey and Michael Webb (most definitely neither from Edinburgh) and I (from Edinburgh originally!) then updated attendees on progress with Jisc’s Effective Learning Analytics programme (PDF - 318kB).  In particular we described the procurement process for the basic learning analytics system (which will be the subject of further publicity and another imminent post on this blog) to be made available freely to UK universities and colleges.  We also discussed the Discovery Stage, where institutions can receive consultancy to assess their readiness for learning analytics. Paul concluded by mentioning the next Network event at Nottingham Trent University on 24th June (booking form).

Later we had several lightning talks, the first from Prof Blazenka Divjak of the University of Zagreb, though currently a visiting scholar at, you guessed it, the University of Edinburgh.  Blazenka presented on Assessment and Learning Analytics (PPTX - 385kB). She’s found the main challenge in learning analytics to be the management and cleansing of data.  She discussed two projects undertaken at Zagreb.  The first examined the differences in performance between groups based on gender, previous study etc. The second analysed the validity, reliability and consistency of peer assessment.  She demonstrated visualisations which allow students to compare themselves with others.

Paula Smith from Edinburgh gave another interesting lightning presentation on The Impact of Student Dashboards.   She reported on an innovation in their MSc in Surgical Sciences which expanded on existing tracking of students via an MCQ system to create a student dashboard. This allowed students to monitor their performance in relation to others, provided information on at-risk students to staff, and enabled evaluation of the impact of any interventions that took place as a result. Most students welcomed the dashboard and many thought they would want to view it monthly.

Finally, Daley Davis from Altis Consulting talked about what his company is doing in learning analytics (PDF - 663kB). Altis is an Australian company and Daley discussed how Australian institutions are extremely focussed on retention due to the funding regime. Working with the University of New England, Altis cut attrition rates from 18% to 12%.  A student "wellness engine" was developed to present data at different levels of aggregation to different audiences. The data used included a system which asked students for their emotional state.

In the afternoon we split into groups to discuss the "burning issues" that had emerged for us during the day.
These were:
Make sure we start with questions first - don’t start with a technical framework
Data protection and when you should seek consent
When to intervene - triage
Is the data any use anyway?
Implementing analytics - institutional service versus course/personal service
Metrics and reliability
Institutional readiness / staff readiness
Don’t stick with the deficit model - focus on improving learning, not just helping failing students
Treating cohorts / subject disciplines / age ranges differently
Social media - the ethics of using Facebook etc for learning analytics
You can’t decline to interpret data just because there’s an issue you don’t want to deal with
Using learning analytics at either end of the lifecycle
Ethics is a big problem - analytics might be used only to recruit successful people
Lack of sponsorship from senior management
Essex found through student surveys that students do want analytics

I’m immensely grateful to Nicola Osborne for her comprehensive live blog of the event, from which this summary draws heavily. I wish there was a Nicola at every event I attended!
Niall Sclater . Blog . Aug 19, 2015 01:03am
Today Jisc is launching the Code of Practice for Learning Analytics at the UCISA Spotlight on Digital Capabilities event here in the amazing MediaCityUK at Salford Quays. Developing this document was chosen by institutions as the number one priority for Jisc’s learning analytics programme.  The Code aims to help universities and colleges develop strategies for dealing with the various ethical and legal issues that may arise when deploying learning analytics. It’s a brief document of four pages and is available in HTML or in PDF.

The development of the Code was based on a literature review of the ethical and legal issues. From this a taxonomy of the ethical, legal and logistical issues was produced. The Code was drafted from this taxonomy and is grouped into seven areas:
Responsibility - allocating responsibility for the data and processes of learning analytics within an institution
Privacy - ensuring individual rights are protected and data protection legislation is complied with
Validity - making sure algorithms, metrics and processes are valid
Access - giving students access to their data and analytics
Enabling positive interventions - handling interventions based on analytics appropriately
Minimising adverse impacts - avoiding the various pitfalls that can arise
Stewardship of data - handling data appropriately

The Code was developed in the UK context and refers to the Data Protection Act 1998; however, most of it is relevant to institutions wishing to carry out learning analytics anywhere, particularly in other European countries which have similar data protection legislation. It can be adopted wholesale or used as a prompt or checklist for institutions wishing to develop their own learning analytics policies and processes.

If you find the document helpful, or feel that anything is unclear or missing, please let us know. Keeping it concise was thought to be important, but that meant leaving out more in-depth coverage of the issues. Over the coming months we’ll be developing an associated website with advice, guidance and case studies for institutions which wish to use the Code.

Acknowledgements

The process has been overseen by a Steering Group consisting of Paul Bailey (Jisc), Sarah Broxton (Huddersfield University), Andrew Checkley (Croydon College), Andrew Cormack (Jisc), Ruth Drysdale (Jisc), Melanie King (Loughborough University), Rob Farrow (Open University), Andrew Meikle (Lancaster University), David Morris (National Union of Students), Anne-Marie Scott (Edinburgh University), Steven Singer (Jisc), Sharon Slade (Open University), Rupert Ward (Huddersfield University) and Shan Wareing (London South Bank University). It was particularly good to have the student voice represented in the development of the Code by David Morris of the NUS. I’m also especially grateful to Andrew Cormack and Rupert Ward for their perceptiveness and attention to detail on the final draft.
I received additional helpful feedback, most of which I was able to incorporate, from the following people (some in a personal capacity, not necessarily representing the views of their organisations): Helen Beetham (Higher Education Consultant), Terese Bird (University of Leicester), Crispin Bloomfield (Durham University), Alison Braddock (Swansea University), Annemarie Cancienne (City University London), Scott Court (HEFCE), Mike Day (Nottingham Trent University), Roger Emery (Southampton Solent University), Susan Graham (Edinburgh University), Elaine Grant (Strathclyde University), Yaz El Hakim (Instructure), Martin Hawksey (with other members, Association for Learning Technology), Ross Hudson (HEFCE), John Kelly (Jisc), Daniel Kidd (Higher Education Statistics Agency), Jason Miles-Campbell (Jisc), George Munroe (Jisc), Jean Mutton (Derby University), Richard Puttock (HEFCE), Katie Rakow (University of Essex), Mike Sharkey (Blue Canary), Sophie Stalla-Bourdillon (Southampton University), Sarah Stock (University of Essex) and Sally Turnbull (University of Central Lancashire). Finally, many thanks to Jo Wheatley for coordinating the production of the print and HTML versions of the Code.  
Niall Sclater . Blog . Aug 19, 2015 01:03am
I’ve just submitted a paper to a forthcoming "Special Section on Ethics and Privacy" in the Journal of Learning Analytics (JLA).  The paper documents the development of Jisc’s Code of Practice for Learning Analytics through its various stages, incorporates the taxonomy of ethical, legal and logistical issues, and includes a model for developing a code of practice which could be used in other areas.

A model for the development of a code of practice

As an open journal, the JLA suggests that authors publish their papers before or during the submission and review process - this results in the work getting out more quickly and can provide useful feedback for authors. So here’s the paper - and if you have any feedback it would be great to hear from you.

Abstract

Ethical and legal objections to learning analytics are barriers to development of the field, thus potentially denying students the benefits of predictive analytics and adaptive learning. Jisc, a charitable organisation which champions the use of digital technologies in UK education and research, has attempted to address this with the development of a Code of Practice for Learning Analytics. The Code covers the main issues institutions need to address in order to progress ethically and in compliance with the law. This paper outlines the extensive research and consultation activities which have been carried out to produce a document which covers the concerns of institutions and, critically, the students they serve. The resulting model for developing a code of practice includes a literature review, setting up appropriate governance structures, developing a taxonomy of the issues, drafting the code, consulting widely with stakeholders, publication, dissemination, and embedding it in institutions.
Niall Sclater . Blog . Aug 19, 2015 01:03am
Jisc’s Summer of Student Innovation in Birmingham

Our app to display learning analytics to students themselves is taking shape.  After brainstorming requirements for the app with staff and students at a workshop in London and seeking further input from students at Lincoln University, we obtained useful feedback on our draft designs from students at our Summer of Student Innovation (SOSI) session in Birmingham in early August.

Continuing my student app design tour of England, I joined colleagues from Jisc and Therapy Box last week in Manchester to apply the same methodology to app design that our SOSI students have been using.  The technique, developed by live|work, includes devising personas and user journeys, and competitor and SWOT analyses, defining business and technical requirements, walking through concepts with other teams, and the development and user testing of wireframes. This was a highly effective process, enabling us in a day and a half of intensive work to narrow down a large number of potential requirements to a manageable feature set, and to tackle key aspects of presentation and navigation. The result is a set of wireframes for the designers at Therapy Box to get their hands on before they start the build.

A major influence on our thinking is the use of fitness apps such as MapMyRide and Google Fit: some of us are already avid users of these technologies. To emulate their addictive qualities in an app for learning is one of our aims. In developing the concepts we were informed by some assumptions which have emerged from earlier work:
That students can be motivated by viewing data on how they’re learning and how they compare with others
That making the app as engaging as possible is likely to result in greater uptake and more frequent access to it, with a corresponding positive impact on motivation
That increased motivation is likely to lead to better learning - with positive impacts on retention and student success

We do of course recognise that the app may have negative effects on some students who find it demotivating to discover just how badly they’re performing in relation to others. However there’s a strong argument that it’s not in these students’ interests to remain with their heads in the sand. Meanwhile, if data exists about them, shouldn’t we be helping students to see that data if they want to?

Moving on from ethical issues, which I’ve covered extensively in an earlier post, six principles which we want to embody in the app are now explicit.  We believe it should be:
Comparative - seeing how you compare with class averages or the top 20% of performers, for example, may provide a competitive element or at least a reality check, driving you to increased engagement.
Social - being able to select "friends" with whom to compare your stats may add another level of engagement.
Gamified - an app which includes an element of gaming should encourage uptake by some students. This may be manifested in the use of badges such as a "Library Master" badge for students who have attended the library five times.
Private by default - while data that the institution knows about you from other systems may be fed to the app, the privacy of any information you input in the app yourself will be protected. However anonymous usage information may be fed back to the app backend.
Usable standalone - by students whose institutions are not using the Jisc architecture.
Uncluttered - the app should concentrate for the time being on learning analytics data and not be bloated with functionality which replicates what is already present in the VLE/LMS or in other applications.

So let me now take you through the draft wireframes to show how these principles are being taken forward (click the images to enlarge).

When first logging in, the student is able to select their institution from a pre-populated list of UK universities. If the student’s institution is using other parts of Jisc’s learning analytics architecture, in particular the learning analytics warehouse, then much more data will be available to the app. For simplicity we’re ignoring for the time being the use case of a student signed up with more than one institution. But we’re incorporating functionality which we think will be of interest to students regardless of whether their institution has signed up. That’s setting targets and logging their learning activities, about which more later.

While what should go into an activity feed or timeline is likely to be the subject of much debate and future educational research, we plan to integrate this dynamic and engaging concept, so essential to applications such as Twitter and Facebook. The wireframes are intentionally black and white and allow space to incorporate images but not the images themselves - in order to concentrate on concepts, layout and navigation at this stage. Here the images may be of your friends or badges awarded. We include examples of possible notifications such as "Sue studied for 2 hours more than you!" but at this stage make no comment as to whether these would be motivational, irritating or otherwise. Future user testing will help clarify what should be included here and how the notifications should be worded.

The engagement and attainment overview mirrors what many fitness apps do: it provides an overview of your "performance" to date. Critically, here we show how you compare to others. This will be based on data about you and others held in the learning analytics warehouse. It may include typical data used for learning analytics such as VLE/LMS accesses, library books borrowed, attendance records and of course grades. We’ll research further how best to calculate and represent these comparators or metrics. At this stage we’ve chosen to avoid traffic light indicators, for example, as these would require detailed knowledge of the module and where the students should be at a particular point in time. Now let’s see what happens when you click the More button.

In the activity comparison screen you’ll see a graph of your engagement over time and how it compares with that of others. You can select a particular module or look at your whole course.  We’ll populate the drop-down list with options for who you can compare yourself with, such as people on my course, people on this module and the top 20% of performers (based on grades). Comparing yourself to prior cohorts of students on a module might be of interest in the future too. We may show a graph here with an overall metric for "activity" based on VLE/LMS usage, attendance etc. Or we may want to break this down into its components.

The next feature of the app allows you to log your activities. This is some of the "self-declared" activity that we think students might want to input in order to gain a better understanding of what learning activities they’re undertaking and how much effort they’re putting into each. Let’s click Start an activity.
Starting an activity allows you to select the module on which you’re working, choose an activity type from a drop-down list (such as reading a book, writing an essay, or attending a lab), select a time period you want to spend on the activity, and choose whether you want a notification when that period is up. A timer is displayed in the image box and you can hit the Stop button when you’ve finished. The timer will continue even if you navigate away from the app.

Setting a target is the final bit of functionality we want to include in the app at this stage. Again this is building on the success of fitness tracking apps, where you set yourself targets as a way of motivating yourself. In this example the user has set a target of reading for 10 hours per week across all their modules. The image will show a graphic of how close they are to achieving that target, based on the activity data they have self-declared in the app. A sketch of how such progress might be calculated from the logged activities appears at the end of this post. Navigation to your next target may be through swiping.

Setting a target involves selecting a learning activity from a pre-populated list and specifying how long you want to be spending on it. We added a "because" free text box so that learners can make it clear (to themselves) why they want to carry out the activity (e.g. "I want to pass the exam", "my tutor told me I’m not reading enough"). Users may be more likely to select a reason from a pre-populated list than to fill in a text field, but we’ll monitor this to see whether it’s being used. We’re also considering the use of mood indicators here to show how happy or otherwise you’re feeling about how you’re getting on with meeting your target. There’s lots of potential for comparing your mood with others, showing how it’s changed over time, or even sentiment analysis by institutions if students want to share such information with them - but that’s one to tackle later.

This doesn’t include all the screens we’ll need, but we do now have a pretty good idea of the initial functionality to be incorporated in the app, its layout and navigation. There’ll no doubt be a fair bit of tweaking before v0.1 is built, but you should get the general idea of what’ll be available from the screens above. We make no attempt at this stage to incorporate predictive analytics, which might show, for example, whether you’re on track to succeed or drop out. That will come in future iterations as, no doubt, will some of the other great ideas people have come up with that we’re leaving out for this first version of the app, scheduled for release in April 2016.
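To make the activity logging and target setting concrete, here is a minimal sketch of how self-declared activities and a weekly target might be represented, and how progress towards the target could be calculated. The type names, the activity types and the seven-day window are assumptions made for illustration only; they are not the app’s actual data model.

```typescript
// Illustrative sketch of self-declared activity logging and target
// progress; names and structure are assumptions, not the app's schema.

type ActivityType = "reading" | "writing an essay" | "attending a lab";

interface LoggedActivity {
  module: string;
  type: ActivityType;
  start: Date;
  end: Date;            // recorded when the student hits Stop
}

interface Target {
  type: ActivityType;
  hoursPerWeek: number; // e.g. read for 10 hours per week
  because?: string;     // free-text reason, e.g. "I want to pass the exam"
}

// Hours of a given activity type logged in the 7 days up to `now`,
// across all modules (mirroring the "10 hours of reading per week" example).
function hoursThisWeek(log: LoggedActivity[], type: ActivityType, now: Date): number {
  const weekAgo = now.getTime() - 7 * 24 * 60 * 60 * 1000;
  return log
    .filter(a => a.type === type && a.end.getTime() >= weekAgo)
    .reduce((sum, a) => sum + (a.end.getTime() - a.start.getTime()) / 3.6e6, 0);
}

// Fraction of the target achieved, capped at 1 for display as a progress graphic.
function targetProgress(target: Target, log: LoggedActivity[], now: Date): number {
  return Math.min(1, hoursThisWeek(log, target.type, now) / target.hoursPerWeek);
}
```

A progress fraction like this could drive the graphic on the target screen, with the same activity log feeding the engagement comparisons described earlier.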
Niall Sclater . Blog . Aug 19, 2015 01:02am
I will begin by distinguishing between what we call 'must know information' and 'good to know information'. Must know information is directly linked to the learning outcomes. This information must be presented upfront and the learner should not have to search for it. Good to know information is information that the learner can view if he/she wishes to read a little extra about the topic. It can be displayed as 'click to know' text, hyperlinks, tabbed presentations and so on. This information must not be presented upfront as it is not crucial to the learning objectives. The other logic behind this is quite simple: learners tend to miss clicking the other tabs, links and buttons. Therefore, any information that will influence the learning outcome must be presented upfront.
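As a rough illustration of this rule (the names here are mine and purely hypothetical, not a standard authoring model), the distinction could be captured as a flag on each content item, with only the must-know items rendered upfront and the rest offered behind optional interactions such as tabs or 'click to know' text.

```typescript
// Illustrative only: a toy content model for the must-know /
// good-to-know distinction described above.

interface ContentItem {
  text: string;
  mustKnow: boolean; // directly linked to a learning outcome?
}

// Must-know items are shown upfront; good-to-know items are returned
// separately so they can sit behind tabs, hyperlinks or "click to know" text.
function layoutScreen(items: ContentItem[]) {
  return {
    upfront: items.filter(i => i.mustKnow),
    optional: items.filter(i => !i.mustKnow),
  };
}
```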
Archana Narayan . Blog . Aug 19, 2015 01:02am
What is creativity? Thinking up something new/original? Not necessarily... The ideas we come up with are typically related to the knowledge we already have. No idea comes from nothing. Every idea is inspired by an old one or by something you have read/seen/experienced. You either innovate on an old idea or put a few ideas together to come up with a more 'new' idea. Creativity is about thinking differently, stretching the boundaries, trying things you haven’t done previously, or improvising on an existing idea.

Creativity is inspired by passion. When you love what you do, you come up with better ideas to do it better. You need your own space and time to be creative. You also need to be free of tension, stress and pressure. Organizations must give their employees the space to think freely and the freedom to execute new ideas. This will encourage employees to be creative at work.

Is creativity a skill? I think so. You can consciously work on being creative.
- Brainstorming helps hone your creative ability.
- Stay in touch with what’s happening around you. This could be news, movies, good books, music, and so on.
- Identify your personal space. You need room to think. Make this space for yourself.
- Get inspired by creative things around you. This could be people, things, words, ideas.
- Read, read and read some more. Read on varied topics. This will help open your mind.
- Discuss, debate, argue. Engage in discussions with colleagues, friends, and family.
- Ask questions like ‘Why not?’ rather than accepting things as they are.
- Think you can, not you can’t. Sit with a notepad and list down various possibilities. You will never know if you can or can’t till you try it.
- If you think you are stuck and no creative juices are flowing, take a break. Do something that will help you relax and loosen up.
- Have confidence in yourself. Only if you are sure of yourself will you try to do something different.
- Lateral thinking helps. Think beyond the obvious.
- Know your facts/stuff.
- Not all ideas are doable. But noting them down will help you filter and build on the idea that will work for you.

What hinders creativity?
- Ignorance
- Lack of confidence
- Noise and crowds
- Working mechanically with no thought
- Rigidity
- Laziness
- Narrow-mindedness

You can see creativity everywhere: in the way Tupperware boxes are designed, in advertisements that capture our interest, in the choice of clothes we wear, in the interiors of your house, and so on.
Archana Narayan . Blog . Aug 19, 2015 01:02am
Why are gain attention strategies important? Within the first few minutes of a training program, the learner decides whether the course is worth his/her time or not. An effective gain attention strategy has the power to increase the motivational level of your learner. Using one, you can:
- Arouse their curiosity.
- Make them think about a particular concept.
- Make them laugh or break the ice.
- Help them grasp what is going to be covered in the course.
- Build expectations.

Basically, it will make your learner want to see what comes ahead. Imagine! The learner is actually interested in learning. He/she is going to give you and your training program a chance.

What qualifies as a gain attention strategy?

1. A pre-test that tells them where they stand at the beginning of the course
For example: Before we begin this module, let us attempt a brief questionnaire to identify your personality trait.

2. A challenge thrown their way
For example: You are a customer care executive at a call center. You have several customers calling you for information. You need to provide them with the information they require and close each call quickly to take the next one. How many calls can you close by the end of the day?

3. A problem-solution approach
For example: You have been appointed as the manager at SimCom. You manage a team of six smart and talented people. Your team’s performance has been very poor over the past few months. You need to motivate your team and ensure that each person gives his/her best to this project. Your boss is keeping a close eye on you. Good luck!

4. A statistical report
For example: Attrition rates are within the range of 30-60% in the BPO industry. The typical reasons for attrition are salary, work timings, better jobs, and so on.

5. Did you know?
Did you know that the Big Five is a group of African animals: cape buffalo, elephant, leopard, lion and rhino? The term was coined by hunters because of the challenge of hunting these wild, ferocious animals when cornered.

6. A comic strip

7. A story/drama
You are a detective. Weird things have been happening at Reth City. Reports show that the number of males has been increasing rapidly and no one seems to know the reason behind this. You have been offered this case. You need to go to the city to understand what is happening. You can talk to the city dwellers. If they seem secretive, you can look around the city for clues. Your assistant, Shweta, will hand out reports and newspaper clippings to help you crack the case. Hurry!

You can think of several innovative ways to design gain attention screens. If you have come across any, share them.
Archana Narayan . Blog . Aug 19, 2015 01:01am
Every Tuesday and Friday, we have learning sessions presented by one of us. This week, it was my turn: I will be presenting on Multi-User Virtual Environments (I’ll blog about this later). As with all presentations, I was reading up on the topic. During my coffee break, I picked up MetroPlus (The Hindu) and read the first article, Trapped in the Net. The article talks about Internet addiction disorder (IAD), "...pathological use of computers, to engage in social interactivity."

"It is becoming common to know of someone, or have heard of someone, who has become obsessed with online activity to the point that their alternative online lives have masqueraded - and in some cases completely dominated - their identities."

"Broken marriages, lost jobs and plunging college grades are just some of the things that people who spend upto 18 hours per day in virtual reality face."

Interesting, isn’t it? These quotes had me thinking of Second Life. This is a multi-user virtual environment in which you can create an avatar for yourself. The environment has its own economy (can you believe it?); the currency is the Linden. You can buy and sell stuff in this environment. It must be so easy to dissolve yourself completely into this virtual environment that depicts real life through the eyes of the user. The avatar is probably everything you want to be and are not. It is the ideal person that you want to be.

Now, coming to the point about IAD: would you start believing that the avatar is actually the real you? It is up to the user to realize their responsibilities and not let their avatar become real.
Archana Narayan . Blog . Aug 19, 2015 12:59am