Jim Gates · Blog · Jun 09, 2016 07:02am
Yesterday @SybilV posted a comment via Twitter during a library orientation for her class:
An innocent enough gesture, one could assume. What I took Sybil’s point to be was that Britannica is not a good scholarly source, and that the library should be encouraging other, more appropriate research practices (like, you know, using scholarly sources, and judging credibility and bias). But what also struck me about this was the odd moment when librarians are encouraging students to use the encyclopedia as a source. And, perhaps I read too much into this, but I think the librarians’ gesture comes as a correction to Wikipedia, i.e. the subtext here is "Don’t use Wikipedia; use Britannica." This might be my bias, or my way of reading things, so fair enough: I didn’t respond to Sybil’s tweet. But apparently Britannica has a Twitter account, and the person who manages the account noticed Sybil’s tweet and decided to respond:
Shocked to see that Britannica was on Twitter, I couldn’t resist and posted the following:
Well, needless to say, it was all downhill (or shits and giggles, depending on your perspective) from there. I won’t recount the blow-by-blow, mainly because it gets really long, and the person who tweets from @Britannica obviously feels passionate about defending Britannica; at one point they posted nine straight tweets defending the appropriateness of Britannica as a scholarly source.
A few notes might be worth making at this point: 1. I am not speaking for @SybilV here; these are my opinions, and I have a sense that my tone, if not also my stance, is more radical/contentious than hers. 2. I have no idea if the account @Britannica is an official Britannica Twitter account. I looked at the Britannica page and couldn’t find it listed. So the account might just be a Britannica fan, or an employee who unofficially tweets from that account. I don’t know, but I think we can take the arguments that @Britannica makes as indicative of those who champion this encyclopedia and its format.
It seems to me that with all the tweets sent back and forth, with others in the Twitterverse adding to the discussion, the central issue was "What is the appropriate use/role for Britannica in relation to society and specifically academia?"
So here’s the thing: 1. It has none. 2. This is because of Wikipedia.
Don’t get me wrong: I am not disparaging Britannica, not really. It had a role, and generally speaking it served it well, but:
Yes, Britannica is a pretty good secondary source. It has a lot of advantages as a secondary source. Articles are fairly thorough, contain citations, and are more or less accurate, but as a secondary source it doesn’t even come close to the value of something like Wikipedia. Thirty years ago, heck even ten years ago, Britannica was arguably the best secondary source around. If you wanted to get a quick overview of a specific subject Britannica was a good place to start, a good portal to gaining deep knowledge about a subject.
In a world of dead-tree-based knowledge, a centrally authorized, hierarchically controlled way of organizing information was a good thing. When you only have so many pages, can’t reprint frequently, and distribution is expensive, these are good decisions. But in a digital, networked information structure they are not.
What you want from a secondary source is a good introduction to a concept that is mostly reliable and up-to-date, entries for as many topics as possible, connections to where to go to learn more, and access that is as easy and ubiquitous as possible. A secondary source is not an in-depth analysis that makes the reader suddenly an expert on said entry or topic; it’s not designed to be. It is just a good overview. No secondary source is going to be completely accurate, or engage in the level of detail and nuance which we want from students, or that is required to fully "know" about a subject.
This is why the Wikipedia banning by schools and professors has always struck me as a particularly stupid policy. The issue is not that Wikipedia is or is not reliable and thus should be banned in academic environments; rather, the issue is that Wikipedia is a secondary source and thus should not be treated as a primary one. But this also holds true for Britannica. Any syllabus which contains language about banning Wikipedia misses this point. Ban secondary sources from student work, not Wikipedia in particular, as this confuses the issue. This doesn’t mean that students shouldn’t use secondary sources; indeed they should, as they are great ways to begin to learn about a subject. It just means they should not cite secondary sources, they should always look for primary ones, and they should never take Wikipedia or Britannica as the final word on a subject. I don’t recall a single syllabus from my college days (pre-Wikipedia) that said "do not use Britannica as a source for your papers, doing so will result in failing the assignment." Seriously, professors explained to us what reference books were for, and how to correctly use them.
Several semesters ago I wrote a piece defending Wikipedia and arguing that it was irresponsible not to teach students how to use Wikipedia. I won’t rehash those arguments here, but I will reference one objection made in the comments of this article, which I often hear when I talk about Wikipedia:
My guess is that the author wouldn’t want his doctor to base his latest surgery on a Wikipedia article.
Of course not, don’t be stupid. I wouldn’t want my doctor to be educated by Wikipedia, but I wouldn’t want my doctor to be educated by Britannica either. The role of Wikipedia isn’t to train heart surgeons how to perform a bypass, nor is it the role of Britannica; that is not the function of these objects. To hold Wikipedia to this standard is more than a bit ridiculous. Wikipedia doesn’t strive to be an object that teaches doctors how to operate (although it seems that Britannica might be trying to claim this ground).
We could argue about the accuracy of Wikipedia (although studies show that it is as accurate as Britannica), or about the policy that "anyone can edit" (at least with Wikipedia I can view the editing history), or we could argue about the problems on Wikipedia, of which there are many (bland prose, serious debates between inclusionists and deletionists, its Western-English bias, an increasingly bureaucratic control structure, among others). But what really isn’t arguable at this point is that as a broad overview of knowledge, a good place to start an inquiry, Wikipedia is a killer app.
When it comes to functioning as a secondary source, a reference guide, Wikipedia has substantial advantages over any prior encyclopedia model. In the same way that Britannica’s model of "get experts in a field to write specific articles" was a vast improvement over the prior model of "get the smartest person to write the whole encyclopedia," Wikipedia is a substantial improvement over Britannica. (Sorry, folks at Britannica, this is just the way it is. P.S. While you are at it, you might want to sell your stock in 8-tracks, newspapers, and scriptoriums.) The breadth of its knowledge, its ability to be linked to other knowledge, its cost (free), its up-to-dateness, and its preservation of editorial discussions (it records not only the article but the discussion which produced said article) make it far more useful. And that doesn’t even begin to address things like how much easier Wikipedia is to use for mash-ups and data extraction, repurposing the information for other reference works.
To illustrate this point I make the following challenge:
I hereby challenge any employee of Britannica to a game of Trivial Pursuit. You can consult Britannica Online for any question, and I can consult Wikipedia. Want to take bets on who will win? (I’ll even let you have all 15 print editions as well.) We could also play "Who Wants to Be a Millionaire?" or "Jeopardy!" if you want.
So this is the bind that Britannica is caught in. It can market itself as a secondary source: we are a great reference tool. But if it does this, someone can easily point out that Wikipedia is a better secondary source, and free (in other words, libraries can spend dwindling resources on other primary materials). Or it can claim to be a great primary source, a role it simply can’t fulfill. It simply doesn’t have a place anymore; there are better services doing what it did.
Now seriously, can we end this debate already? Instead, let’s talk to students about how to use secondary sources appropriately, how encyclopedias function, how all encyclopedias are biased and all knowledge is discursive, and focus on teaching students how to judge credibility and accuracy instead of outsourcing it to people at Britannica.
David Parry · Blog · Jun 09, 2016 06:35am
Scholastic recently chose to censor a book (or, more accurately, ask an author to alter her book, which is a form of censorship) because one of the characters, wait for it . . . has, gasp, "two mommies." The School Library Journal has the background story. Normally I would stop at passing this article around and encouraging people to act: signing petitions, sending complaints, etc. But a reader sent me the link to Scholastic’s own blog, On Our Minds. And, more importantly, pointed out that somehow this blog makes a list of minds they "admire" (aka their blogroll). Since they "admire" my mind I thought I would give them a piece of it.
Dear Scholastic,
It was recently pointed out to me that this blog, Academhack, is contained in a list on your website under the heading, "Minds We Admire." Since you admire my mind I thought I would take the opportunity to share with you what this mind thinks, particularly in response to your recent "UPDATE on Luv Ya Bunches".
I think writing a 300-word blog post attempting to explain your position, while never once addressing the fact that you asked the author to "clean up the book" by removing the reference to same-sex parents, amounts to a tacit admission that the company asked Lauren (the author) to alter her work. In other words, tacit support of homophobia.
I think that defending yourself by pointing to other books you publish with gay and lesbian characters is a bit like saying "some of my best friends are gay."
I think if I worked for your company I would quit.
I think that publishing the book is far different from actively promoting it at your fairs. This is like a "don’t ask, don’t tell" policy, where you accept difference so long as it doesn’t become too inconvenient.
I think that you are making a business decision, not an ethical one, and I think you should just be honest about this.
I think that you are trying to avoid complaints by conservative, narrow-minded, homophobic people at the expense of presenting and promoting diversity.
I think if I were a children’s author I wouldn’t let you sell my book until you reversed your policy.
I think that censoring diverse voices is a surefire way to propagate intolerance. I think that the communities from which you most fear backlash are perhaps the ones who most need to see this book displayed.
I think bowing to financial pressure over doing what is right is the sure way to end up on the wrong side of history.
I think that you are a business which will make financial choices, but I also think that schools and parents can choose which businesses they want to support.
I think if I were a primary school teacher I would refuse to pass out your catalog to my students.
I think I will buy several copies of Luv Ya Bunches and give them out to kids I know. I can think of no better way to combat your homophobia than by encouraging kids to read books which tell diverse stories. (That and one of your competitors will get my money, sending you a financial message as well.)
David Parry · Blog · Jun 09, 2016 06:34am
Just got back from MLA; much writing, blogging, and reflecting to follow. But in the meantime I seem to have overlooked mentioning that an article I wrote for Flow.TV was published (published: is this the right word for it in the age of the internet?) last month. For those who are interested, I make the case that New Media is Neither.
More Later.
David Parry · Blog · Jun 09, 2016 06:34am
Two Things about the MLA conference I want to connect here:
1. Clearly one of the themes that has developed in the MLA post-mortem has been the rise of social media and the influence of technology at the conference. Both The Chronicle and Inside Higher Ed noticed the prominence of Twitter at the convention, or the apparent prominence of Twitter. It seemed that unlike last year, when the majority of conversation about/on Twitter and the MLA was confined to one session, this year social media was clearly playing a role, although noticeably less than at other conferences.
What is more, as The Chronicle noticed, this seemed to be part of a larger trend in the Digital Humanities. Ultimately I agree with Mark Sample (@samplereality), who posted an analysis of the MLA tweets, and Matt Kirschenbaum (@mkirschenbaum), who argued via Twitter that this meme/theme was somewhat overstated. As Matt observed, the MLA has a history of at least being marginally receptive to "technology and literacy" panels, even if they have not been placed in the center of the discourse. (Rosemary Feal deserves mad props for her outreach here. Tweeters aren’t always the most reverent or polite bunch, self included, but I am nothing compared to @mladeconvention.) Given that Matt won the MLA book award for best first book, it is hard to ignore the fact that digital humanities is becoming more prominent and more mainstream, if still marginal. But I also think there is somewhat of an echo chamber effect here. That is, of course those who write online and are engaged with technology are more likely to notice that technology is being talked about. I think if we polled all of the attendees at the MLA, a vast majority of them would have no idea that a conversation (at times academic, at times not) was taking place via Twitter. Indeed, I would venture to guess that a majority could not really describe to you what Twitter is/was.
2. One of the other "much talked about items" at MLA was Brian Croxall’s (@briancroxall’s) paper, or non-paper, titled "The Absent Presence: Today’s Faculty." I say non-paper because Brian, who is currently on the job market and an adjunct faculty member, didn’t attend the MLA; instead he published his paper to his own website. (I am told the paper was also read in absentia.) I won’t recap the whole thing here; you should just go read it. But two things stand out in the article: 1. "After all, I’m not a tenure-track faculty member, and the truth of the matter is that I simply cannot afford to come to this year’s MLA." 2. "And yes, that means I do qualify for food stamps while working a full-time job as a professor!"
For several reasons Brian’s paper hit a nerve. Indeed The Chronicle picked up the story, a piece which for a few days was listed as the most popular story on The Chronicle’s website. His paper became, arguably, the most talked about paper of the convention.
Brian’s story (how the paper became popular, not the content, or at least not yet; more on that in a minute) is in part a story of the rise of social media and its influence. And this is where I think the real story in the Digital Humanities is: not the rise of the Digital Humanities, but rather the rise, or non-rise, of social media as a means of knowledge creation and distribution, and the fact that this rise has changed little. Digital Humanities, if it is rising, is rising as "Humanities 2.0," allowed in because it is non-threatening.
So if you imagined asking all of the MLA attendees, not just the social media enabled ones, what papers/talks/panels were influential my guess is that Brian’s might not make the list, or if it did it wouldn’t top the list. That is because most of the "chatter" about the paper was taking place online, not in the space of the MLA.
Let’s be honest: at any given session you are lucky if you get over 50 people; assuming the panel at which the paper was read was well attended, maybe 100 people actually heard the paper given. But the real influence of Brian’s paper can’t be measured this way. The real influence should be measured by how many people read his paper who didn’t attend the MLA. According to Brian, views of his blog jumped 200-300% in the two days following his post; even being conservative, one could guess that over 2,000 people gave his paper more than a cursory glance (the numbers here are fuzzy and hard to track, but I certainly think this is in the neighborhood). And Brian tells me that in total since the convention he is probably close to 5,000 views. 5,000 people: that is half the size of the convention.
And so, if you asked all academics across the US who were following the MLA (reading The Chronicle, following academic websites and blogs) what the most influential story out of MLA was, I think Brian’s would have topped the list, easily. Most academics would perform serious acts of defilement to get a readership in the thousands, and Brian got it overnight.
Or, not really. . .Brian built that readership over the last three years.
As Amanda French (@amandafrench) argues, what social media affords us is the opportunity to amplify scholarly communication (actually, if you read only one thing on social media and academia today, read this). As she points out in her analysis (interestingly enough, Amanda was not at MLA but was still tweeting (conversing) about the MLA during the conference), only 3% of the people at MLA were tweeting about it. Compare that to other conferences, even other academic ones, and this looks rather pathetic. Clearly MLAers have a long way to go in coming to terms with social media as a place for scholarly conversation.
But what made Brian’s paper so influential/successful is that Brian had already spent a great deal of time building network capital. He was one of the first people I followed on Twitter, and was one of the panelists at last year’s MLA-Twitter panel. He teaches with technology. I know several professors who borrow/steal his assignments. (I personally looked at his class wiki when designing my own.) Besides having a substantial traditional CV, Brian has a lot of "street cred" in the digital humanities/social networking/academia world, more than a lot of folks, deservedly so. It isn’t that he just "plays" with all this social media; he actually contributes to the community of scholars who are using it, in ways which are recognized as meaningful and important.
In this regard I couldn’t disagree with BitchPhD more (someone with whom I often agree) in her entry into the MLA/social media/Brian’s paper nexus of forces. Bitch claims that "Professor Croxall is, if I may, a virtual nobody." Totally not true. Unlike Bitch he is not anonymous, or even pseudo-anonymous; his online identity and "real world identity" are the same. He is far from a virtual nobody. Indeed, I would say he is one of the more prominent voices on matters digital and academic. He is clearly a "virtual somebody," and he has made himself a "virtual somebody" by being an active, productive, important member of the "virtual academic community." If he is anything, he is a "real nobody" but a "virtual somebody." In the digital world network capital is the real "coin of the realm," and Brian has a good bit of it, which, when mustered and amplified through the network capital of others (@kfitz, @dancohen, @amandafrench, @mkgold, @chutry, @academicdave: all of us tweeted about Brian’s piece), brings him more audience members than he could ever really have hoped to get in one room at the MLA.
And so Brian isn’t a virtual nobody, and he isn’t a "potential somebody"; he is a scholar of the digital humanities, one who ought to be recognized. But here is the disconnect: Brian has a lot of "coin" in the realm of network capital, but this hasn’t yielded any "coin" in the realm of bricks-and-mortar institutions. If we were really seeing the rise of the digital humanities, someone like Brian wouldn’t be without a job, and the fact that he published his paper online wouldn’t be such an oddity; it would be standard practice. Instead Brian’s move seems all "meta- and performative and shit" when in fact it is what scholars should be doing.
And so, in the "I refute it thus" model of argumentation, I offer up two observations: 1. The fact that Brian’s making his paper public was an oddity worth noticing means that we are far away from the rise of the digital humanities. 2. The fact that a prominent digital scholar like Brian doesn’t even get one interview at the MLA means more than that the economy is bad and tenure-track jobs are not being offered; it means that universities are still valuing the wrong stuff. They are looking for "real somebodies" instead of "virtual somebodies." This is something which the digital humanities has the potential to change (although I remain skeptical).
In the panel at which I presented, an audience member, noting the "meme" about the rise of the digital humanities, asked if all of this "stuff" about digital humanities just reflected our fascination with gadgets, or how we balance our technology with humanities: how does the digital affect the humanities in a non-gadget way? (I paraphrase, but that’s the thrust of the question.) After a few of the other panelists answered, I suggested that the question was bad (this is often a rhetorical trope I employ). I said that instead of thinking of the word digital as an adjective which modifies the humanities, the humanities 2.0 model, I am more interested not in how the digital affects how we do the humanities, but rather in how the digital can fundamentally change what it means to do humanities, how the digital might change the very concept of "the humanities." I don’t want a digital facelift for the humanities; I want the digital to completely change what it means to be a humanities scholar. When this happens, then I’ll start arguing that the digital humanities have arrived. Really, I couldn’t care less about text visualizations or neat programs which analyze the occurrences of the word "house" in Emily Dickinson’s poetry. If that is your scholarship, fine, but it strikes me that that is just doing the same thing with new tools. Give me the "virtual somebodies" who are engaging in a new type of public intellectualism any day. Better yet, if you are a university and want to remain relevant in the next moment, give these people a job.
David Parry · Blog · Jun 09, 2016 06:33am
"For [the theoreticians of photography] undertook nothing less than to legitimize the photographer before the very tribunal he was in the process of overturning." -Benjamin, Little History of Photography
I want to explicate some of the issues I raised in the last post, address some of the comments, walk back my position on at least one point (yes, you are all right: the word "bad" was not a fair characterization), and dig in on a few others. To keep these posts stylistically similar, let me again start with two observations.
1. One of the essays I most enjoy teaching in my media studies classes is Benjamin’s The Work of Art in the Age of Mechanical Reproduction. When teaching this essay I often begin the class by saying Benjamin understood why Ebert was wrong. That is, Ebert rather famously claimed that while video games might demonstrate a high level of craft, they will never rise to the level of art. Of course what Benjamin argued in The Work of Art, at the time in relation to photography, was that the question should not be "Is photography art?" but rather the more important question: "What does having photography do to our concept of art?" (By extension, the question of video games should be: what does having video games do to our concept of art?)
This is similar to how I think about the concept of digital humanities. I think we should not be asking, can the humanities be digital, or how does the digital allow or not allow us to do humanities, but rather, what does having the digital do to our idea of the humanities (and by extension what it means to be human). Anything short of this strikes me as less than interesting, but more importantly a missed opportunity.
2. Okay, I can tell I am really going to get in trouble for this one but . . .
The following is not originally my observation. I wish I could take credit for it, as I generally agree and think it is really astute, but it’s not mine. (I will let the original source remain anonymous, as it was an "off the record" conversation, but if said person wants to claim it, I will note credit here.)
Generally speaking (painting really broad but accurate brushstrokes here), Digital Historians and Digital Literary Scholars have had significantly different approaches to incorporating "the digital" into their respective scholarship. Digital Historians have leveraged the digital to expand and engage a wider public in the work of history; as examples, think of Omeka, or leveraging social media to engage in crowdsourced projects. That is, Digital Historians have often begun by asking "how does the digital allow us to reach a larger/public audience?" Now this could be because many of the folks working in Digital History come from a public history background . . . But in the case of literary studies, the "digital" projects have not, as much, changed the scope of the audience. So if you look at digital literary projects, they often look remarkably similar to projects in the pre-digital era, just ones which have been put on steroids and run through a computational process. It seems to me that the Digital Historian model is a better one.
Okay so onto the post. . .
I can’t help but notice that most of the talk, or at least critique, in the comments centers around the last paragraph, largely ignoring the analysis which led me to that paragraph. (To be fair, I sort of invite this, saving my central and controversial claims for that section, but still . . .) That is, the early part of the post has as its supposition that "Universities are still valuing the wrong stuff," and by Universities I am mostly talking about humanities scholars, but that’s only because the context was the MLA. When I look at what type of digital scholarship in the humanities is being recognized and valued by the institutions within which we operate, it seems that that scholarship is mostly conservative and does little to question, upset, or threaten the dominant paradigms. And what I see as truly important work has yet to receive recognition. The fact that someone like Brian can be without a job and largely a "real nobody" while he is such a significant "virtual somebody" is just one example of this.
In his comment on the original post, Tim Lepczyk suggests that a large part of the problem here is in defining what I, or anyone, mean by the digital humanities, or humanities 2.0. I think this is spot on, and this is probably one of the most slippery parts of my argument, one I haven’t entirely worked out. As he points out, there has been a certain amount of baggage from prior text analysis that is ported over in the upgrade to digital humanities. I definitely see humanities scholars as collaborating with computer scholars, IT folks, and people from a range of places within the academy and outside it. (Indeed, one of my favorite presentations at the MLA addressed one particularly thorny aspect of this issue: @nowviskie’s take on intellectual property and labor in the age of collaboration.) But I think if what the digital does is just take the old disciplines and make them digital, leaving disciplinarity and the silo structure of the University intact, it will have failed. I want to see the digital transform not just the content or practice of the disciplines, but the very idea of disciplinarity.
But it is not entirely true, as Brian Breman argues, that I am advocating a "this changes everything" approach to the digital humanities. In fact my major fear, the thing that keeps me up at night, is the idea that "this changes nothing." Indeed, that was the impetus for the original post: despite the digital, nothing changes. It seems to me that the digital affords us (both as academics and as members of a wider society) the opportunity to do something really different, to re-organize many of the founding assumptions we have about how to organize knowledge, how to organize people, and even the nature of what it means to be human. But I see us not necessarily taking advantage of this opportunity. In fact I see this as a fading opportunity: as our culture makes the "change over" from one intellectual substructure (dead tree) to another (digital network), it seems that we are porting over a host of prejudices about knowledge production and dissemination that are worth rethinking. (As just one example of this, think about intellectual property and knowledge ownership.) So I would love it if "this changes everything," but unfortunately I think (as my original post claimed) that this has changed little, especially within the walls of academia. This is not to suggest that there are not some significant revolutions/projects taking place both within and outside of academia, but a lot of what is being done, and counted, as digital scholarship does little to question the founding principles of academic knowledge production, especially within the field of "literary studies" (principles which we can at this moment, perhaps, but for a very short time, re-negotiate).
At my most radical, I’ll raise the question this way: the ease with which some digital scholarship has been so smoothly and effortlessly incorporated into the walls of academia should perhaps give us pause to question whether or not it actually signals any change at all. Again, to paint broad brushstrokes, but ones which I think are relatively accurate, scholarship tends to fall into two categories: 1. That which does little to call into question the walls of the ivory tower, or, what is worse, strengthens those walls: a digital humanism which would build an ivory tower of bricks and mortar and supercomputers, crunching large amounts of textual data, producing more and more textual analysis that seems ever more removed from the public which the academy says it serves, re-inscribing and re-enforcing a very conservative form of humanities scholarship. 2. A digital humanism which takes down those walls and claims a new space for scholarship and public intellectualism. Now, while these two positions are not as mutually exclusive as I am painting them here, I am more than willing to sacrifice the first for the sake of the latter.
In the longest comment on the last post, @mkirschenbaum suggests that when we think about the internet we need to think not about the Derrida of The Post Card or Of Grammatology, but rather the Derrida of Given Time. This is perhaps the most succinct phrasing I have heard of the problem. We spend too much time thinking about the structure of the link or the data and not enough time thinking about the social relations and ethical questions opened up by this space.
And in this regard I agree in part with @sramsay’s comment that "new tools can facilitate a new type of public intellectualism." The printing press was not just a faster version of the scriptorium; it was the "gadgets of the early modern period and the networks of communication in which they flourished" that changed the intellectual and wider cultural landscape. The printing press was not a mere tool by any means. But it is precisely at the level beyond the printing press as gadget that I want to look, and on which I think we need to focus our efforts. On one level the printing press was just a gadget, and the real, important change came at the level of the social negotiation about how that gadget would be deployed. Authorship, intellectual property, authority, piracy, etc. were all social/legal/cultural negotiations that occurred and were not decided at the level of the gadget, even if the gadget did speed up the rate of connectivity. If academic scholarship, just to take one example, asks "what can I author now on the web?" without first calling into question the notion of "authorship" and recognizing the degree to which it might be heterogeneous to the way knowledge can be organized on the web, we will have missed a golden opportunity.
I think I should have been clearer, or not so glib, in my paraphrasing of the question from my panel. To say that it was a "bad" question was wrong. What I should have said was that answering the question straight up is not the most productive way to look at the problem. Instead, we should answer the question backwards: what if we thought about the "digital" not merely as an adjective (a gadget to be applied to the humanities) but as something much more? What does the digital do to our conception of the humanities? That, it seems to me, is where we should place our focus.
And so this is where I am really going to dig in. @tanyaclement, correctly so, calls my analysis out, saying that like the MLA I am perhaps focusing too much on social media: "Clearly, there has been a lot of focus on "Digital Humanities" this year because of the rise of twitter and, as such, DH has now been associated with social media almost exclusively. This is unfortunate." Where I am going to disagree is at the level of "unfortunate." I think this is a fortunate thing (if only it were the case). The more the digital humanities associates itself with social media the better off it will be. Not because social media is the only way to do digital scholarship, but because I think social media is the only way to do scholarship, period. Yes, it is true that there are hosts of scholars having scholarly discussions who are not on Twitter, but you know what, they had better be, or they risk being made irrelevant. No, this doesn't mean that every scholar has to have a Twitter account (though it probably wouldn't hurt), but it does mean that every scholar had better be having their discussions in public, on the web, in these digital spaces, for all to participate in.
I realize that this stance displays a certain amount of irreverence toward the very people on whose shoulders I stand in order to make this argument, but at the same time it displays a hyper-fidelity to their work, thinking about how it can be carried into this new digital substructure and used to shape this (perhaps) new way of organizing knowledge.
Yesterday this argument took a different sort of turn when Ian Bogost published The Turtlenecked Hairshirt: Fetid and Fragrant Futures for the Humanities. In part Bogost was weighing in on the question of the Digital Humanities and its arrival or non-arrival, but he was actually, it seems to me, making a much broader critique. Regardless, as he observes in the comments on the post, much of the discussion centers around a conflict between digital humanities and new media. Along these lines Matt asked if this is not just a debate over semantics, and perhaps less generously, a territorial pissing match: throwing around the term "digital humanities" as an empty signifier, a backlash against the digital humanities.
Let me be clear: I have no desire to engage in an academic territorialization argument. Honestly, I couldn't care less; having left an English department, I am quite happy not to have to engage in those discussions. My position was a much larger one, addressing the question of whether or not the "digital humanities" has arrived, and, in a connected manner, what this means for the future of the humanities. It appeared to me that much of the discussion at MLA was about the arrival of the "digital humanities" and, in a related theme, the extent to which this can serve as a "cure" (as Ian puts it) for what ails the humanities.
So let me put it a different way: maybe the digital humanities has arrived, maybe it is becoming central and important in the way that humanities scholars do their work, but the digital humanities that has arrived (the slow work that @tanyaclement mentions) is the kind of arrival that changes nothing, a non-event. The only type of digital humanities that is allowed to arrive is the type that leaves the work of humanities scholars unchanged. Seriously, don't tell me your project on using computers to "tag up Milton" is the new bold cutting-edge future of the humanities; or, if it is the future of the humanities, it is a future in which the humanities becomes increasingly irrelevant and faculty continue to complain at boorish parties about how society marginalizes them, all the while reveling in said marginalization, wearing it as a badge of honor which purportedly proves their superiority on all matters cultural.
As Ian observes, "It's not "the digital" that marks the future of the humanities, it's what things digital point to: a great outdoors. A real world. A world of humans, things, and ideas." That is what I was after in my original post: the idea that the digital I am hoping for, the digital that will challenge and change scholarship, hasn't arrived yet. For all the self-congratulation about the rise of the digital, little if anything has changed. Humanists are still largely irrelevant in the broader cultural discussions, and it seems to me they purposely choose to remain so. (Actually, I am not certain to what degree this is really about "literary" humanists, as it seems this issue plays out differently in history. But that might just be the perspective of an outsider.)
And this is the brilliance of Brian's paper (content notwithstanding): he made his material more relevant than all the other papers that weren't published; he engaged the outside (even if it was a paper that was a lot of inside baseball on the workings of the academy) because he opened his analysis and thinking to a wider audience (and as @amandafrench and @bitchphd remark, did it with a real-time spin that enhanced it at the level of both content and delivery). Again, the real influence should be measured by how many people read his paper who didn't attend the MLA. Or maybe the real influence of his paper should be measured by how many non-academics read it. Scholars need to be online or be irrelevant, because our future depends upon it, but more importantly, the future of how knowledge production and dissemination takes place in the broader culture will be determined by it.
David Parry | Blog | Jun 09, 2016 06:32am
The following is a guest post from UT-Dallas graduate student Barbara Vance (@brvance). This past semester Barbara taught an atypical rhetoric and composition course. Barbara teaches Rhetoric 1302, the standard introductory college writing course. She was given a group of students who, she was told, were struggling with writing and needed "more structure." In response Barbara did the smart thing and actually gave the students more freedom and control over their education. I'll quickly summarize, and then get out of the way and let Barbara tell the story. Essentially, Barbara turned the class into a documentary production class where the students spent the semester producing a film, working collaboratively on one project. Where is the writing, you ask? Well, read on; Barbara had them write about their experiences the whole time, giving them a reason and context to write. The results are pretty amazing. The post is a bit on the long side, but worth the read, as Barbara covers not only the "what" but the "why." Also check out the two embedded videos: the one below is the video from the students, and at the end is an interview with Barbara. This is a bold, risky approach, especially given Barbara's status as a graduate student, not tenured faculty, but I think if college rhetoric, and indeed college education, is to remain relevant over the coming years, this is the type of experimentation and adaptation that will be necessary.
Click To Play
The Internet has fundamentally changed not only the means through which we communicate, but also how we communicate and how we think. It has, in turn, altered what others expect from our writing, what employers look for in applicants, and how we conceive of work that used to be private. One need only look at the blog explosion to see how the ability to disseminate our thoughts cheaply and quickly, and to develop a dialogue with others, empowered thousands to believe their voice was/is worth sharing.
Teachers cannot ignore this communication shift. A Kindle is more than a paperless book: it changes how we read, how we define reading, and how we perceive intellectual ownership. As society continues down a path toward ever-increasing mobile communication, our conceptions of how we persuade will also change. I think few Rhetoric instructors would argue with the idea that students should be able to not only consume information, something they’ve been doing their entire lives, but also to produce it. But as it stands now, most rhetoric courses focus strictly on writing, and they limit assignments to the classroom environment - practices that devalue other rhetorical mediums, and the purpose of rhetoric itself. It is with this spirit in mind that I designed my special topics Fall 2009 freshman rhetoric course at the University of Texas at Dallas. I wanted to transform the traditional rhetoric class with its standard textbook into a more relevant, new-media oriented course that focused not only on writing and speaking, but one that also looked at rhetoric in film, photography and music.
To that end, I designed the course to include a live WordPress blog on which students could speak to each other and anyone else in the world who cared to listen. A website containing copies of their larger papers coincided with the blog. This made the assignments more communal in nature and reinforced that writing is meant to be shared. In a more traditional classroom environment, students write only for the teacher, an approach that makes assignments seem less relevant to the students and devalues the very idea of rhetoric. Requiring students to blog, contact people outside their classroom, and post writing on the Internet teaches them to engage with the community, gives their writing more significance, and supports rhetoric - a term that, by definition, implies community.
While this public exposure to their work can be intimidating for some students, it forces them to take more accountability for their words while teaching them the power of communication. If they embrace it, students can develop a sense of freedom and power that resides in someone who feels comfortable with both the tools of communication and also the arenas that currently dominate the conversation. Right now, a majority of the conversations are increasingly happening online. Students must know how to navigate these waters. It is a direction more and more university rhetoric departments are going toward, including Ohio State University, which has some excellent examples of class blogs.
A strictly digital approach is not for everyone. I will always prefer a paper book, believe memorizing grammar rules is essential, and don’t think everyone needs a blog. Nonetheless, these are issues students should be aware of. Creating work in a vacuum delegitimizes it. When the goal of your course is to teach students to persuade, and you don’t include what is now the most influential tool for disseminating your argument, you are crippling your students.
Writing and reading online is different than performing those same tasks on paper. We communicate differently on the Internet, and as more and more people read from their phones and portable e-readers, our understanding of communication will change further still. As technology shifts, so does our means of persuasion; if students do not explore this, they will find their skills quickly out of date. Rhetoric is more than just learning a standard structure for an argument. Students should be asking themselves: "How does what we write and what we think change when we know that in ten minutes we can create a blog and broadcast to the world? How does this change how we see and portray ourselves?" These are the deeper rhetorical questions students need to grapple with. It is this focus that will make them stronger readers, writers, and citizens.
The second media-based aspect of the course was centering the writing assignments around a film that the students would produce. My goal was that this would provide continuity between assignments, while reinforcing one of the fundamental ideas underlying the class: rhetoric is found in a variety of media, not just writing. Many rhetoric programs devote time to "visual rhetoric," but it is often cursory at best and culminates in a short essay examining a film or piece of art. While I do not object to this method, I was always bothered that writing was still given precedence over the image. We tell students that pictures are a viable means of persuasion, and then we ask them to write about it. This hardly reinforces the message. So I thought: "Why not have the students work with the mediums they study, including film?"
I "hired" each student for a position in the "company" based on his skills and interests with the idea that this would not only hold their interest, but also be quite germane to their course of study. Everyone had to apply for their job, writing a cover letter and resume, and having a personal interview with me. Students were never entirely on their own, as the positions were part of large groups: pre-production, post-production, marketing, and web design.
Throughout the semester we discussed the various rhetorical aspects that comprise a film - including text, images, music, and sound effects - focusing on how and why creators made the decisions they did. Always, the emphasis was on these crafts as rhetorical devices. The end result was a website and corresponding film, created by the students and comprised of their work throughout the semester. Overall, I have found it a fun, effective approach.
An added benefit of the film was that it captured the students' interest, as did broadcasting their work on their website, www.rvuentertainment.com. They became so invested in the film that the writing pertaining to it took on new meaning. The first essay required them to identify an issue in their local community and write about it. From these, the students voted on which would be made into a film. The second major writing assignment was a visual essay in which the students each described how they would make the film, supporting their paper with images they found online or took themselves. In addition to these, smaller assignments were given to each student based on his role in the company, including reports, marketing letters, short essays on artists who inspired them, and storyboards. All students were also required to blog weekly. The students really took to the project and, barring the procrastination that is a given for many college freshmen, they handled it well. Weekly student-run meetings in class kept everyone on the same page and let me know where things stood. There were also individual meetings in which I worked one-on-one or in small groups to help them with their respective roles.
I admit, I had my doubts. Coming from a traditional writing background, and considering the department's goals, I felt the focus of the class should remain on writing aptitude, and the one constant question rolling around my head all semester was: "Are you doing the students an injustice? Are you taking time away from writing skills to focus on film, sound, and these 'alternate' methods of persuasion?" I think my fears were reasonable, but ultimately the class worked out well. Because so many rhetorical devices remain constant across mediums, teaching students how pacing works in screen cuts or music only reinforces how it could be employed in their writing.
Overall, I think the class was a success. It taught the students to work with a variety of mediums and to always consider their work as something to share. It is this final point that the entire course hinged on: community. The blog, the group film - everything the students did - was about engaging the world, establishing a presence, and utilizing the tools that the rest of the world is operating with, rather than limiting them to traditional print-based technology.
Here is an interview about the project with Barbara.
David Parry | Blog | Jun 09, 2016 06:30am
Last week I sent Michelle Nickerson, a colleague of mine here at UT-Dallas, a link to Dan Brown's "Open Letter to Educators." Michelle, like me, is concerned about the future of the University, and as someone whose opinion I respect, I wanted to see her response. After watching it we swapped emails back and forth about Dan's video; at one point Michelle asked if I was going to write about it for this blog, to which I responded, "how about you write about it and I'll post it." So, the following are Michelle's thoughts on Dan Brown's piece. I don't entirely agree, but this is a good jumping-off point. Let the conversation begin.
University administrators and faculty should pay attention to the message of Dan Brown's "Open Letter to Educators." Students need to ask themselves, as Brown does: "What does it mean to receive an education?" Brown's most important observation is how the university, as an institution, is failing to change in ways that make it relevant to what he describes as "a very real revolution." He notes that technologies popular in higher education today—like email, on-line databases, and Blackboard—represent minor adjustments that fall woefully behind the curve of the real sea changes threatening to undo "the University" as an institution of learning. Brown, moreover, correctly identifies how shifting class relations challenge the current structures of higher education. I agree that the internet has, in many ways, proven itself a democratizing force in our society and many others. Brown's limited insight, however—contained as it is in his box of "information"—prevents him from seeing numerous other layers to this problem. I will talk about one.
The university, as a concept, could very well disappear just like Brown predicts…for many Americans, but not for all.
As institutions of higher learning seek ways to economize by eliminating and devaluing the spaces of learning that have been so central to "the University," they are coming to resemble exactly what Dan Brown sees in them—exchange sites of information, marketplaces easily replaced by much cheaper flows of information accessed on the internet. As they pack more students into lecture halls and fill the rosters of on-line classrooms, universities save billions of dollars in the short run, but diminish the value of their degrees. Classrooms and other spaces in the university lose their meaning in this race to the bottom. The competition for more bodies per professor, however, does not threaten the university as a concept. This is where Dan Brown's class analysis could use some help. The "State University"—specifically, the notion of affordable education—is eroding. Financial and intellectual elites (rich people and academic-types) tend to be suspicious of each other, but one thing they seem to agree on is what the space of the University represents, and they will not stop paying for it…they will continue to pay hundreds of thousands of dollars to send their children to Ivy League universities and small private liberal arts colleges. Princes and sheiks in foreign countries will continue packing their children off to the United States for higher education. These spaces, since they come at a very high price, are rarified worlds that diverge ever more from that of state universities. Administrators of these universities know that parents aren't paying to send their children to these expensive schools for "information." They are sending their children to become the producers, manipulators, and interpreters of information.
When university classrooms, libraries, courtyards, and student commons are designed and utilized to their greatest effectiveness, they become spaces where students learn not for the sake of absorption (passively), but for the sake of generating new knowledge, developing new conceptual models, discovering new worlds of meaning not introduced by their professors. The professor to student ratio is critical in this respect, because the professor-as-critic-and-listener is just as important, if not more important, than the professor as instructor. I therefore recommend that viewers heed Dan Brown’s "Open Letter to Educators," but think more carefully about what is disappearing with the university.
And for what it’s worth here is the video that sparked this conversation.
David Parry | Blog | Jun 09, 2016 06:29am
So, I have been borrowing an iPad for the last couple of weeks. I realize, given my critique of the device, that it might seem a bit bizarre for me to be using one. But I consider it research, a way to have an informed position, and since this is really one of our lab computers, I didn't have to purchase one. I have been trying to use it for everything I need a computer to do, forcing myself to use it over my laptop. What follows is my now informed, researched critique of the iPad.
My initial thought: I would pay $1000 for one of these tomorrow, but only if they unlocked the damn thing. This (I am typing on it now, more on this later) is perhaps the most frustrating computer experience I have ever had. Frustrating not because the iPad is difficult to use, it is anything but. Rather, it is frustrating because it is such an artificially, unnecessarily crippled device. Or as I have said to those who have seen me carrying one around, "It's like being given a Ferrari, only to discover that it has been equipped with a VW bug engine." The iPad looks nice, and shows what is possible, but only shows, never really delivers. Like I posited earlier, this is an appliance, not a computer, but if this were open, running a full OS . . .
In this respect I think David Pogue's schizophrenic review, one I initially thought was a little cheeky, too clever by half, is pretty much dead on. If you buy one wanting a computer you will be disappointed, but if you buy one wanting a device for consuming all your digital content, it is well worth the price tag. Consider: it is a video game platform, an ebook reader, and a way to surf the web. Quite a bargain in some respects. (But, and I stress this, it is strictly a consumption-based device right now; you really have to fight it to use it to create and compose.)
On being an ebook reader:
Initially I thought that the iPad, with its backlit screen, could not compete with the Kindle or Sony eReader and e-ink, but after using the iPad I think the difference is not all that large. I have read for several hours at a stretch on the iPad and it doesn't produce the eye strain I am used to associating with screen reading (Instapaper was one of my favorite uses of the iPad). To be sure, e-ink is still better, but the difference is nowhere near as large as I expected. Add to that the much easier (theoretically) ability to annotate your reading, and I could foresee a future where I carry a slate-style computer around to do most of my reading, especially journal articles and student papers. Furthermore, the ability to do creative things and think beyond the book (embed video, dynamically update, nonlinear presentation) makes it promising. I downloaded one "instructional app," a Statistics program that is a textbook plus quizzes etc., and it definitely points to a future for class content distribution that is much better than the current model. Plus I could carry around all my student papers, syllabi, and important documents in one small object. I do this already on the iPhone via Dropbox (minus the student paper part), but having them on a larger screen would make them far more usable. With an iPad I could truly go paperless.
Interface:
This is where the iPad really shines. The multi-touch screen interface changes the way you interact with a computer. Sitting down at a computer with a mouse and a keyboard just seems primitive now. The web surfing experience is vastly superior. It's honestly difficult to describe the zoom-in, zoom-out, slide-objects-around tactile nature of viewing. The iPad begins to change not only the way you interact with the web, but what can be done in terms of design and presentation. The best way to describe this is: think Minority Report (note to Apple: Minority Report serves as proof of prior art, so don't be assholes and try to patent all of this). Very few applications have taken advantage of this yet, but the ones that harness the power of multi-touch really are a different sort of experience. I have been using iThought for mindmapping lately, and there is a huge difference between clicking on a branch and moving it (as with Nova Mind or other desktop-based applications) and actually grabbing/touching the branch and moving it to where you want. The future is in touch screen interfaces, and I can't wait for more of them.
Keyboard
The keyboard is not bad, I can use it for most of my typing. I am still slower than on a laptop with a full keyboard, but getting better, and I am sure I could retrain myself given another month or so. I also think that a case which would prop it up a bit or using the external keyboard could help. Certainly the keyboard would not limit me from using this as my primary computer, especially if I kept a full size keyboard at work for long composition, but I did write this whole blog post on the iPad.
Battery life:
Battery life is wicked good. I can easily go a whole day without charging it, more like two days.
Data:
This is where the iPad really sucks. There is no desktop, no place to store all of your data. For example, if you want to build a Keynote presentation (the Keynote app is horribly crippled, by the way; many of the features I am used to are not there) this can be incredibly frustrating. You are in Keynote and you want a picture for your slide. You have to exit Keynote, go over to Safari, open it up, find the photo you want, and copy it (if you want more than one you have to save them to iPhoto; if it is just one you can save it to the clipboard), close Safari, go back to Keynote, and import the picture from the clipboard or iPhoto. Now say you need to give credit for the photo: you have to close Keynote again, open Safari back up, copy the URL, close Safari, open Keynote back up, and then paste the URL into your credits slide. Seriously frustrating. I know the next release of the OS promises to allow multi-tasking, but the real issue here is not having a desktop to which you can save all the images, video, text, etc., that you want. Or an open design platform so somebody could design me a clipboard with a 50-item cache. Applications for the most part can't talk to each other and can't pass data back and forth. So you have to develop all of these workarounds to have access to files. Right now the best way, I think, is through Dropbox, but your Keynote presentation can't save to Dropbox; it can only save locally. So you have to email it to yourself, and then from your home computer upload it to Dropbox. See: ridiculous, frustrating.
Locked Out:
The above is really a problem because of the way the iPad is locked down: you can only have apps which Apple wants you to have (can we talk about the fact that Apple denied a cartoonist's application because it might be offensive? do we really want one company building that kind of media influence?). I get what Steven Johnson is saying, that the device can be seen as generative, that the app store provides a certain amount of stability and funding guarantee for developers, so that what we have seen is an incredible explosion of iPhone apps, which is likely to be reproduced on the iPad. The problem with Johnson's argument is that an open system is not mutually exclusive with an app store. Apple could provide an app store for the iPad, one with safe, approved apps, and still allow others to install apps they didn't get from the app store. This is how iTunes music works: you can buy songs from Apple or from Amazon, or upload your own, all of which iTunes can handle. Apple as large media conglomerate and hardware and software distributor scares me. How many people would leave their Apples behind if Jobs went to an App Store model for laptops and desktops? Many of my favorite Mac apps are ones that probably would not have gotten approved.
What developers will build for the iPad will no doubt be amazing, and for some time this will probably continue to drive popularity, but developers might also start to balk at Apple's tight control. I really want to see what developers could do if they had root access to this thing; my guess is it would be pretty f'in amazing.
What’s Next:
I won't be buying one. I am going to wait. Having said that, I think if I were a developer or teaching web design directly I would. Why? Because it really changes the way you can compute, and having a device that provokes this type of thinking is useful, a device that points to the future. But I still stand by the claim that I wouldn't want these for my students as their computing devices. I would hate to see what type of student would develop if this were their only or primary means of computing. Instead I am holding out hope that the competitors will quickly get an open one to market. As for me, I am going to go learn Android so that when a slate running Android gets to market I'll be ready to use it as my primary device.
David Parry | Blog | Jun 09, 2016 06:28am
Yesterday, Dan Cohen's tweet about the iPad and censorship got me thinking about a drawback to the iPad-for-education argument.
What Dan made me wonder/realize is that by using iPads for educational purposes schools, both higher ed and secondary/primary ed, would be opening themselves up to censorship by Apple. In other words as I tweeted this morning:
Consider that Apple's track record here is not all that great. The way the App Store is currently administered, applications have to receive approval from Apple to be listed. Now, supposedly this was initially done for quality assurance purposes (to make sure apps won't crash your device) and in limited cases to ensure that apps don't duplicate existing core apps (listening to music, email) or interfere with AT&T's money interest. But as the App Store developed, Apple extended their approval process into the role of censorship. From Apple's Program License Agreement:
"Applications may be rejected if they contain content or materials of any kind (text, graphics, images, photographs, sounds, etc.) that in Apple’s reasonable judgement may be found objectionable, for example, materials that may be considered obscene, pornographic, or defamatory."
So, Apple might block anything that in their "reasonable judgement" they think is "obscene, pornographic, or defamatory." This, as far as I am concerned, is a dangerous situation: Apple as moral censor. Now, certainly it is within their legal rights to do so, but the question is whether or not it is a good idea for us to enter this contract (and by us I mean both users and developers).
Most famously this restriction affected developers of "pornographic content," with Wobble being one of the more hyped removed-then-reinstated apps. This also means that the range of iPhone sex apps must have stick figures rather than more illustrative pictures. So, say for instance you are teaching a course on human sexuality, or a sex education course: is Apple going to restrict what you can and can't do with the iPad, content-wise?
Okay, you might be thinking this is a liminal case; teaching sex in schools is always a touchy subject, and Apple will necessarily be treading on shaky ground here. I think most people probably feel no threat from Apple as long as they limit their censorship to "pornographic content," but as their policy indicates, it extends further than that. There is political content that Apple not only would be willing to censor, but has already censored. (Worried yet?) The most famous case of political censorship by Apple at this point is that of Mark Fiore, who won the Pulitzer Prize for his political cartoons. His app was censored by Apple. Upon his winning the Pulitzer, his app was subsequently made available, but a situation where someone has to win a major award to overcome Apple's censorship doesn't exactly strike me as conducive to intellectual discourse.
Now consider the possible futures. Will Apple censor political apps that one might want to use in the classroom? What happens when Apple takes the iPad-in-education movement international? Will the German laws restricting what can and can’t be said about Nazism limit what content Apple makes available? What about in China? Currently this is not an issue, because the devices we use are independent of the content (at least with most computers): the company doesn’t get a say in how I use its device.
This initially might not seem like a big concern, for as many people pointed out on Twitter today, Apple is not going to censor documents that one accesses on the iPad; Apple only restricts what applications you can run on its devices. So presumably one could buy an ebook-reader app for the iPad and run any textbook published in EPUB through the reader, and Apple would have no say in the matter.
But as Dan’s tweet points out, this is a concern. In the first place, many books are published as apps, so they will not get a workaround. This is especially true of textbooks, which are likely to be published as apps requiring updates every year, following the software-leasing model rather than the purchase-a-song model (textbook publishers will love this, as it yields greater revenue). And many of the educational materials people use will be "rich textbooks": not just ebooks, but packaged content with videos, quizzes, and "interactive content," so just publishing to .epub or .pdf won’t constitute a workaround. Imagine the scenario where you want to include this M.I.A. video in your course content about police-state violence and racial profiling. (YouTube already removed this video, so it is not too far-fetched to imagine Apple would deem it too violent.)
But take this even a step further: beyond "bookish" content, there is a range of material that I would want to make available to my class which Apple might choose to ban (and I am not even talking about the illegal stuff here). Consider: I have taught (and probably will continue to teach) Super Columbine Massacre RPG! Clearly this is content someone might find "obscene" or "defamatory"; how do I know what Apple’s judgement on this is going to be? Is this really a decision I want to turn over to Apple? Indeed, by allowing a locked-down device into the classroom, especially if one makes it the centerpiece of a technology-in-the-classroom movement, this is precisely what will happen. Apple will have control over what type of content students can place on these devices.
I realize, as many pointed out on Twitter, that this is a decision many school boards already make, censoring course material; believe me, I live in Texas, I get it. But there is something substantially different between a community deciding what is and is not appropriate for its students, and a corporation making these decisions. And for higher ed, where we are not subject to the same school-board politics, this would certainly mean accepting a larger set of restrictions than we are used to. Again, having one corporation serve as a media hub for software, hardware, and now content strikes me as a future we ought to resist.
(I need a "Just Say No to iPad in Education" banner.)
David Parry
Blog
Jun 09, 2016 06:27am
The following is a summary of my talk, or more accurately the short written version of my talk, "Burn the Boats," which I gave a little over a month ago at the DWRL in Austin. You can read the post, or skip to the end and watch the videos (which run about 40 minutes) for the longer form of the argument.
Earlier this year Marc Andreessen was interviewed by TechCrunch on the future of publishing, in particular journalism. Andreessen’s response was, provocatively, "Burn the boats." He was referring to the moment when Cortés, fleeing from Cuba and landing in Mexico, ordered his troops to burn the boats, preventing any possibility of return. The lesson: don’t defend lost ground; at times there is no going back, and making decisions that ensure one does not consider a return can be a good move. Andreessen’s point was that old print-based media forms are dead, and it does no good to try to re-envision them for the 21st century; rather, journalism institutions need to move boldly to web-based models, giving up their print-based biases.
In a similar spirit I would like to suggest that academics "burn their boats," or in this case, more specifically, "burn the books": make a definitive move to embrace new modes of scholarship enabled by web-based communication, rather than attempting to port old models into the new register. Rather than giving the book a digital facelift for 21st-century scholarly communication, academics should move past the book-based biases which structure scholarly communication, and instead imagine and execute born-digital scholarly forms which leverage the evolving digital media landscape.
Let me be clear: I like books. In fact it was my love of books, or more specifically my investment in what books can accomplish, that led me to graduate school. My PhD is in English, after all. Indeed I collect books, and although I don’t do it much anymore, I have at times tracked down and acquired first editions of some of my favorite works. I am not suggesting that we actually engage in book burning, nothing of the sort (although if I did burn some of my books, it would make moving easier). Instead I am suggesting that we burn our love affair with books, and that, out of reverence for the book, we stop treating it as the only, or indeed primary, means of scholarly communication. Not only are there better ways, but if academia wants to remain (or, more skeptically, become) relevant, we ought to recognize that the book is no longer the main mode of knowledge transmission.
Faced with the transformation to the digital, the newspaper industry chose to protect a business model instead of preserving its social function. My fear is that academics are making the same mistake. Granted, this analogy is not perfect; there are contours and shapes, nuances and details that matter here. The two are not directly equivalent, but I think the underlying logic is the same. It concerns me that academics and intellectuals, with some exceptions, seem to be repeating this mistake, following the digital-facelift model: asking how they can continue to do what they do now, but in digital space, rather than asking how what they do has been fundamentally changed in the age of the digital networked archive. Administrators tend to preserve the business function (how can we offer our classes online, versus how does the online reshape the very idea of a class), while academics end up defending the political and ideological function (the importance of books and peer review).
It is probably worth distinguishing here between the materiality of the book and the ideologies and biases we associate with the book. At the most basic level, a book is a dead tree, processed and bound together in leaves of paper and stained with ink. But many of the things we have come to associate with the book are not in fact coterminous with its material structure; rather, they are biases developed over the Gutenberg Parenthesis. I won’t fully develop this idea here, but it is what I often call librocentricism: a book-biased way of thinking, in which the book stands in for certain prejudices and ideas about knowledge. As a way into this, notice how the word "book" often stands in for, or comes to mean, the entirety of a matter, as in The Book of Nature, to "throw the book at someone," or The Book of Love. There is a lot more to this idea, and I would no doubt need more than a blog post to develop it, but I think it is easy to recognize, even if the full complexity of the argument would take time, that "book" comes to be an epistemological framework for knowledge, not just a material one.
One quick example of how this works, before I move to some ideas for restructuring scholarship: syllabi. A syllabus is often structured like a book, with a beginning, a middle, and an end, indeed even with chapters (sections), where the traversal (completion of the weeks, or reading of all the pages) promises to deliver the knowledge product.
The idea that knowledge is a product which can be delivered in an analog vehicle is precisely what I want to call into question. What the network shows us is that many of our views of information were, and are, based on librocentric biases. If you printed out all the information on the net, roughly 500 billion GB, it would stack from here to Pluto ten times. While the book treats information as something scarce, the net shows us precisely the opposite: information is anything but scarce. Books tell us that one learns by acquiring information, something purchased and traded as a commodity, consumed and mastered; the net shows us that knowledge is actually about navigating, creating, and participating. (To be sure, some people still trade in knowledge, buying and selling secrets, but this is of a substantially different order than the work we as academics, especially humanities academics, do.)
Knowledge is no longer print based, nor governed by the substrate of paper. While in many ways we might continue to harbor librocentric biases, as we move away from structuring knowledge to end up on paper, these framing structures will prove less and less necessary; indeed, they may actually impede our ability to participate in knowledge conversations.
I am not saying that we should wholesale give up on books, actually performing a book burning to free ourselves from all the pages in our respective offices. I am suggesting something slightly different: we should start conceiving of our scholarship as if it will not end up in books. It still might, but we should begin by asking ourselves what scholarship would look like if it were not designed to end up in books.
Here are some ideas, or suggestions for this change over:
Stop Publishing in Closed Systems: If I can only convince you of one thing, I hope it will be this. If you publish in a journal which charges for access, you are not published, you are private-ed. To publish means to make public; if something is locked down behind a paywall, where someone needs a subscription to view it, it is not part of the "common knowledge" base and thus might as well not exist. Academic journals are treating knowledge as if it were a scarce commodity; it is not, so don’t let them treat it as such. If someone wants to publish something you wrote, ask if you can keep the copyright and license it under Creative Commons, and if they say no, don’t give it to them; find someone who will. Look for journals which publish only online and only for free.
Self Publish: Publishing and editing are hacks based on the scarcity of paper; there is no need to carry them over to the new medium. Once, publishing was the most efficient way to reach the largest audience; that is no longer the case, so let’s get over our publishing fetish. Publishing online allows you to engage a wider audience, both faster and more efficiently than any print-based journal. We think of an academic’s role as presenting polished, finished work and ideas, but this need not be the case. We should switch to presenting our ideas in process, showing our work, not just the final product.
End the .pdf madness: A .pdf document is not a web-based document; it is a print-based document distributed on the web. One of the principal advantages of the web is the way it connects, operating as a network of connections within an ecosystem of knowledge: one can search, copy, paste, edit, and link with ease, none of which is true of a .pdf. The .pdf is just a way of maintaining print-based aesthetics and structures on the web. In the same way you wouldn’t think of publishing a book without the appropriate footnotes, don’t publish to the web without the appropriate live links.
Get Over Peer Review: Peer review is another hack based on the scarcity of paper. Given the cost of producing knowledge, and the fact that academic journals and presses could only afford to produce so many pages per issue, peers were established to vet, and to signal that a particular piece is credible and more worthy than the others. This is the filter-then-publish model. But the net actually works in reverse, publish-then-filter, involving a wider range of people in the discursive production. Some of the most productive feedback I receive on my work comes not from peers, who have a rather narrow sense of what counts and what doesn’t, but from a wider range of people with diverse perspectives. Why do academics argue for small-panel anonymous peer review? One thing we know is that diversity of perspective enriches discourse.
Aspire to Be a Curator: I think we have to give up being authorities, controlling our discourse, seeing ourselves as experts who possess bodies of knowledge over which we have mastery. Instead we have to start thinking of what we do as participating in a conversation, an ongoing process of knowledge formation. What if we thought of academics as curators, or janitors: people who keep things up to date and clean, who host, point to, and aggregate knowledge, rather than just those responsible for producing new stuff? Do we really need another book arguing that throughout the history of literary scholarship the important field of ‘x’ has long been ignored? No. But we could actually use some really good online resources and aggregators for particular knowledge areas.
Think Beyond the Book: Think of the book as one form, not the form. Indeed, think of things that move beyond the book. What if your writing didn’t have to be stable, didn’t have to have a final version? What if you could constantly update, change, alter, and make available your work? There would be no final copy, just the most recent version. While the constantly-in-beta mode might concern those who aim for perfection, it can also be liberating when you realize that nothing is fixed, taking advantage of the fluid. What happens when we give up on, or at least refuse to be limited by, librocentricism? What if a piece didn’t have to be 20 pages for a journal article or 250 for a book? There are economic constraints that place limits on the size of academic writing; how much better can we be when we get rid of these? Or what would an academic argument as an iPhone app look like?
Let me be clear: I am not saying that the book is dead. In one regard it is already dead; in another it continues to haunt us and will never die. And we should be glad for this haunting; there are many features of the book from which we benefit. What I am saying is that the centrality of the book is gone, and academia would do well to recognize this and move in new directions, onto new ground, where many already are. We should not continue to constrain our thinking with a librocentricism which no longer structures or limits the way knowledge is produced, archived, or disseminated.
(P.S. Below is a photo I took on my visit to The Chronicle last week. Apparently these are all the books they received from academic publishers in the last week (that’s right, just one week) which nobody wanted. In other words, at an academic institution like The Chronicle, not one reader could be found for any of these books. They were giving them away for free. Seriously, we should stop this madness. Won’t somebody please think of the trees?)
(Below is the full video where I elaborate on the points/ideas above.)
Burn the Boats/Books, part 1 from DWRL on Vimeo.
Burn the Boats/Books, part 2 from DWRL on Vimeo.
David Parry
Blog
Jun 09, 2016 06:26am
Harrisburg University seems to be getting a small amount of press lately for announcing that, as an experiment, it would block all social media websites for a week (Inside Higher Ed article, Chronicle article). Facebook, Twitter, MySpace, even AIM and the chat features in Moodle will be unavailable on the university network (or, more precisely, the campus will block the IP addresses of most social networking services and turn off these features in its own software).
In general I think it can be a productive activity to encourage students to take a step back from relying on social media. I say this not because I think social media is a bad or even harmful technology, but because I think that changing behavior can lead students to certain realizations about whatever it is they are studying. Showing students is usually a better pedagogical method than telling them. I won’t go into all the reasons in detail here; if you want, you can check out the longer article I wrote for Flowtv.org on students’ saturated media environment. In short, what seems strange and unfamiliar to us is normal to most of our students. That is, there is nothing particularly strange or unusual to them about Facebook, texting, Twitter, YouTube, etc. As an educator, one particularly effective tactic, I think, is to take the familiar and make it look strange. Or, as Siva Vaidhyanathan explained on Twitter recently: students are like fish swimming in an ocean of media; my job is to get them to notice the water.
So it might seem like I would support Eric Darr, the provost of Harrisburg, and his plan to cut off social media for a week. Except I don’t. Actually, I think it is a bad idea (maybe with good intentions, but a bad idea nonetheless). Let me explain.
In short, I think this sort of experiment needs to be done carefully, at a local level, not globally with a broad brush. As Eric Stoller characterized the decision, having the provost decide the matter for the whole university seems a bit "heavy handed." (Note: the "heavy handed" quote attributed to me in the Chronicle article originates with Eric, although I agree with it.) In this instance an abstracted authority is telling his subordinates what is and is not healthy, or at the least creating an experiment in which the participants have no say. Whether or not it is Eric’s intent, the message easily becomes "students cannot live without social media; they should try it for a week." And again, whether or not this is the provost’s intent, it ends up coming off like a "kids these days" situation. Try substituting another "batch" of technology to see how problematic this becomes. For a substantial portion of the faculty, dissertations were written on a typewriter: maybe we should ban all computers for a week and make graduate students work on typewriters. We used to communicate in handwritten letters: for a week, all communication must be handwritten. People used to walk everywhere before there were cars: maybe we should have students practice a car-free week.
This is not to suggest that any one of the above couldn’t be a productive project, but they would only be productive given the right context. If you were studying urban planning, it might be useful to have students not use cars for a week; if you were studying linguistics and machine technology, maybe only letter writing would be appropriate. But without a context, I think the experiment is bound to fail, probably creating more frustration and anger than anything else.
In essence, Harrisburg (or Eric, it’s difficult to tell) has grouped together a wide range of technologies and banned them all without really recognizing their differences, and recognizing the differences between these technologies is one of the crucial things we should be teaching. First, who decides what is "social media" and what is not? Is Foursquare blocked? What about Last.fm? World of Warcraft? Discussion boards? Heck, even blogs with comments? I am not sure that I could decide what is and is not social media, and I am supposed to be an expert in it; how is a school going to decide? Second, on a practical level, it is near impossible to block all social media sites. Even if you could create a working definition of social media, it would be impossible to create an exhaustive list of sites; there are simply too many to count.
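The enumeration problem is concrete enough to sketch in a few lines of code. The blocklist and domains below are hypothetical, purely for illustration (nothing here reflects what Harrisburg actually deployed); the point is that any such list has to take a position, site by site, on what counts as social media:

```python
# Hypothetical campus blocklist -- illustrative only, not the university's actual policy.
BLOCKLIST = {
    "facebook.com",
    "twitter.com",
    "myspace.com",
    "aim.com",
}

def is_blocked(hostname: str) -> bool:
    """Return True if hostname, or any parent domain of it, is on the blocklist."""
    parts = hostname.lower().split(".")
    # Walk up the domain tree so that www.facebook.com matches facebook.com.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

# Borderline services the list silently lets through:
for site in ("foursquare.com", "last.fm", "worldofwarcraft.com"):
    print(site, "blocked?", is_blocked(site))
```

However long the set grows, borderline sites sit outside it until someone rules on whether they count, and new services appear faster than any list can be maintained: the hard part is the definition, not the code.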
Furthermore, how does one even go about enforcing this? A university-wide ban is not likely to stop students from using social media; rather, it is likely to teach students how to set up proxies and route around the IP blocking the university is planning (not that this wouldn’t in and of itself be a good thing for students to learn; I wonder how many Tor downloads will happen that week?). Or students will simply go off campus to access the net, making the ban an inconvenience rather than an experience of giving up social media. What is more, it is likely to disproportionately affect students over faculty, and to affect some students more than others. Faculty members who go home at night, and students who live off campus, will be less affected. Worse, there is likely to be a class divide here, as students who can afford to work at places like coffee shops will access the net there, and students who can afford smartphones will just rely on those devices for social networking.
There is one other concern worth noting, one that I tried to raise in the Chronicle article but which unfortunately came across too softly. We should start by recognizing that social media isn’t merely an online form of communication; rather, social media is how students communicate. In other words, Eric isn’t asking students to give up communicating online; he is asking them to give up a large portion of the way they communicate. Imagine if the experiment were to have no one on campus talk to each other. There are actually fairly serious concerns here that shouldn’t be treated lightly. For many students, their social media networks of friends are crucial to their daily lives, whether as the primary means by which they stay in touch with people or, most significantly, as a medium by which they connect with their support groups. Asking students to give up social media is not just a technical ask; it is a social and psychological one as well, one which I think those who don’t use it as a primary means of communicating probably underestimate.
But it is all too easy to critique without offering a solution. So here is mine: how I go about asking students to go on a social media fast.
I do it within a specific class context, making it an assignment. Since I teach social media, media is both the object and the means of study, so any ask I make is within the context of the class. In the same way that asking students to give up cars would make sense in an urban planning class, asking students to give up a particular social media site within the context of my class makes sense. This also presents the opportunity to discuss and process the experience.
Create buy-in. Just telling students to live without social media seems too authoritarian; explaining it to them, again within the context of the class, is a far more effective way to handle the situation. If students are bought into the assignment, they are more likely to do it. An assignment like this cannot possibly be monitored, so you need students to want to do it willfully. Do all my students follow through? No, but a majority do. (Incidentally, the person who commented on the Chronicle article that I would leave it up to a class vote sort of missed this point. You can demand a lot of things from students; the one thing you can’t demand is that they learn. Their mindset going into any assignment will greatly determine what they get out of it.)
Make the assignment after, or while, studying the object. This again creates context. After discussing Facebook and the way students use it, asking them to give it up for a week will make more sense.
Pick specific social media, not all social media. When I assign students to give up Facebook for a week they are still free to use email, discussion boards, even Twitter. By being specific you get students to pay attention to the specifics of each site rather than treating them all as equal, which they are clearly not. I might have students give up search engines for a day next semester.
Have a specific timeline and a reason for the duration. Make it a challenge.
Recognize that students will be differently affected by this assignment, especially if you are asking them to give up their support networks.
Join them. I never ask students to give up something that I am not also willing to give up.
Have them write about it, during and after. I want them to process the experience, they learn more this way and learn more from each other this way.
P.S. You should also read Eric Stoller’s take on this from a student life perspective.
David Parry
Blog
Jun 09, 2016 06:25am
My most recent pedagogical obsession is not, as you might think, social media fasts, but rather working out ways to create effective group projects. Honestly, I consider this one of my serious shortcomings as a professor. I have yet to create a group project with whose results both the students and I were happy. Something always goes wrong. This is not to say that there haven’t been good ones (and some total misfires), but I have yet to really figure out the best way to do it. Part of my problem comes from not having this modeled for me in graduate school (we in the humanities are more accustomed to working solo), coupled with my own few past experiences as a student, in which I greatly disliked working in groups. But beyond that, I think it is a substantial problem with both the way institutions are designed and with student expectations. It is hard to evaluate students individually (what the institution requires) yet hold the whole group accountable. And I struggle with this, because on the one hand I want to encourage and evaluate students for who they are, but on the other I see it as part of my job to teach students how to work in groups. Most of the work environments they are likely to end up in will require working in groups, and internet projects, due to their complexity, require groups.
So here is what I am trying this semester for my EMAC 4325, Privacy, Surveillance, and Control on the Internet . . .
The focus of the class is on semester long research projects where each group has a public website/blog covering one aspect of the class. So for the whole semester groups have to work together to produce their project. The project is designed to require a range of skills, design, writing, coding, image manipulation, video and audio editing etc.
I came up with two basic rules for this project:
Everybody in the group gets the same project grade (which is 50% of the final grade).
If you are unhappy with a member of your group, i.e. feel that they are not sufficiently contributing, you can fire them from your team.
I put together these two rules from different projects I saw others do, although neither project combined them. On the first day of class I explained these rules and then handed out the long, detailed sheet containing all the information on the project. The first thing each group had to do was come up with community rules describing how the group would function, what the initial responsibilities would be, and, finally, the means by which they could dismiss a member of the group. In other words, they had to write a group constitution of sorts, complete with the reasons and methods by which they would dismiss someone. (I did explain that in every case a meeting with me would be necessary, but I did this mainly as a way to make sure the group rules were followed; if a group decides to remove someone, I plan to support them.)
If someone is removed from a group then they become a group of one, responsible for their own project (which frankly is quite a bit of work).
Do I think this will solve all of the group-assignment problems? No. But I think it more realistically represents how groups function outside of academia: they succeed or fail as a group. It doesn’t really matter if you work really hard, harder than anyone else around; you still need the group (ask LeBron James about this). By focusing on the group, I won’t get caught trying to figure out team dynamics and what went wrong, assigning blame (like Restaurant Wars on Top Chef); instead, everyone succeeds or everyone fails. Simple . . . hopefully.
The next thing I did was get them divided into groups.
This was actually the most difficult part of the class so far. I wanted students to have a say in which group they joined, so that they were working on a topic that interested them, but I also wanted to avoid people simply pairing up with people with whom they had worked before or were friends. I also wanted to make sure that each group got a diversity of talent. I contemplated having them pick teams (schoolyard style) but thought that would end up being a bit ridiculous and isolating for the people who were not picked. Instead I had each student write their name on one side of a notecard; on the other side they wrote the three topics that interested them most, and then the three skills they would bring to the project, creating anonymous mini-resumes. I then selected one person for each group, and that person in turn got to pick one person for their team from the notecards. On the whole this worked out: everyone got into a group that interested them, the talent in every group is pretty diverse, and groups were picked based on talent, not prior relationships or popularity.
Overall, three weeks into the semester, I am happy with how the groups are progressing. I have started to give them weekly feedback, always directed at the group rather than individuals. You can see the complete details of the project at the class website, along with links to all the ongoing projects.
I’ll write about this again at the end of the semester . . .
David Parry
Blog
Jun 09, 2016 06:24am
As many of you know, The University at Albany (the place from which I received my PhD) has decided to close its French, Italian, and Russian departments. There are a range of reasons that make this an uninformed decision; for a rundown see Rosemary Feal’s The World Beyond Reach. More entertainingly, though, Jean-Luc Nancy, Professor of Philosophy at the University of Strasbourg and the European Graduate School, has written a rather snarky critique of Albany that pretty much sums up what is at stake here. Since it is short I have reposted the entire response (with permission):
Choisir entre supprimer le français et supprimer la philosophie… Quel beau choix ! Enlever plutôt le foie ou le poumon ? Plutôt l’estomac ou le coeur ?
Plutôt les yeux ou les oreilles ?
Il faudrait inventer un enseignement strictement monolingue d’une part - car tout peut être traduit en anglais, n’est-ce pas ? - et strictement dépourvu de toute interrogation (par exemple sur ce qu’implique la "traduction" en général et en particulier de telle langue à telle autre). Une seule langue débarrassée des parasites de la réflexion serait une belle matière universitaire, lisse, harmonieuse, aisée à soumettre aux contrôles d’acquisition.
Il faut donc proposer de supprimer l’un et l’autre, le français et la philosophie. Et tout ce qui pourrait s’en approcher, comme le latin ou la psychanalyse, l’italien, l’espagnol ou la théorie littéraire, le russe ou l’histoire. Peut-être serait-il judicieux d’introduire à la place, et de manière obligatoire, quelques langages informatiques (comme java) et aussi le chinois commercial et le hindi technologique, du moins avant que ces langues soient complètement transcrites en anglais. A moins que n’arrive l’inverse.
De toutes façons, enseignons ce qui s’affiche sur nos panneaux publicitaires et sur les moniteurs des places boursières.
Rien d’autre !
Courage, camarades, un monde nouveau va naître !
Jean-Luc Nancy, professeur émérite d’une vieille Université française (pas pour longtemps).
What’s that you say? You can’t read it because it’s in French? Well, luckily for you, despite the best efforts of the Albany administration, there are still French Studies faculty left. So you can read it in translation:
To choose between eliminating French or Philosophy . . . what a fabulous choice! Should one rather take out the liver or the lung? The stomach or the heart? The eyes or ears?
We need to invent teaching that is, on the one hand, strictly monolingual - for isn’t it true that everything can be translated into English? - and strictly lacking in all forms of questioning (for example concerning what is implied by "translation" in general and from one language to another in particular). A single language unencumbered by the static [parasites] of reflection would be a great subject for university study, smooth, harmonious, easily submitting to the controls of acquisition.
We should propose eliminating both of them, French and Philosophy. And everything existing in proximity to them, like Latin or psychoanalysis, Italian, Spanish or literary theory, Russian or History. Perhaps it would be wise to introduce in their place, as requirements, certain computer languages (like Java), as well as commercial Chinese and technological Hindi, at least until such languages are able to be completely transcribed into English. Unless the inverse were to happen first.
In any case, let’s teach what is displayed on our advertising billboards and on the stock exchange monitors. That and nothing else!
Courage, comrades, a new world is about to be born!
Jean-Luc Nancy, Emeritus Professor of an old French (not for long) university.
Translation by Professor of French and English David Wills (fair disclosure David was my dissertation director).
David Parry
Blog
Jun 09, 2016 06:23am
Sorry, folks, not much here as of late. That is because I have been working on another project.
At any rate, for those who are interested: on Monday at noon Eastern time, I will be participating in a webinar on teaching Writing as Information Arts (sort of a way of thinking about teaching digital literacy).
David Parry
Blog
Jun 09, 2016 06:23am
Last week I publicly (via Twitter—really, what other venue is there?) mentioned that I might be leaving Dropbox. What ensued was a rather lengthy conversation between me and others as to why I would do such a thing. Soon after the conversation started, the folks at @Dropbox noticed and joined the discussion. Why would I think about leaving Dropbox, a service I often cite as one of the most useful around for educators? One-word answer: privacy. Based on some recent reports, I now have reason to be concerned about the degree to which Dropbox can keep files secure and private. When I expressed these concerns via Twitter, the folks at Dropbox responded with some helpful information and an invitation to write their legal department with any concerns I might have (140 characters being insufficient for adequately addressing the matter). And as I said on Twitter, credit to Dropbox for listening and engaging in a conversation.
I started to write such an email, and then changed my mind: why not publicly lay out my concerns and let other educators see what the issues are? After all, I feel somewhat responsible, since I have spent so much time praising Dropbox. Rather than have a private dialogue with Dropbox, it would be better to make this public, yes? So here goes.
The Background:
For those that don’t use Dropbox, think of it as an automatically syncing flash drive in the cloud, an excellent way to keep files synced across multiple computers and have them available on whatever device you have in front of you at the time. (Here is the official explanation.) Because of Dropbox I never need to carry assignments, syllabi, or journal articles that I want to read with me, or on a flash drive. These are just stored in the cloud and I can access them anytime the need arises. And this is just the tip of the ridiculously useful iceberg that is Dropbox. If you want more, just look at all the times it is mentioned on Profhacker (or just Google Dropbox uses and see what I mean). Dropbox has become one of the most important services in my media/computing ecosystem. On a scale of one to ten for usefulness and ease of use Dropbox is an 11.
The Problem:
About a month ago I started to see reports that expressed concern over Dropbox security: questions about the encryption being used, and about who has access to the files you store on their servers. Basically there are two sets of concerns. The first is that by design Dropbox is insecure. You can read the whole article, which is mildly technical but amounts to a concern that it would be fairly trivial for a nefarious party to steal one file and thus gain access to all your files without you necessarily knowing. The second is that Dropbox updated their Terms of Service to reflect the fact that they have access to your files if needed. In other words, if the government subpoenas Dropbox, Dropbox has the ability to turn over your files in unencrypted form to officials. (I know what some of you are thinking: Who cares, I am not doing anything illegal? . . . but wait, I promise you should.) Both of these issues boil down to the fact that the encryption of your files takes place on the Dropbox servers, not on your own computer. In other words, the question is who has the keys to your files and where those keys are stored.
One way to think about this concern is to imagine your files being stored in a lockbox. One approach would be to put the files in a lockbox, keep the key, and send the whole box to Dropbox. That way Dropbox has no way to unlock the files. But rather than this method, what Dropbox employs is a technique whereby you send them your files, they place them in a lockbox and give you the key, but they keep another copy of the key that lets them look in your box anytime they want. Why would they do it the second way instead of the first? Several reasons, but I think there are probably two main ones: 1. Ease of use for Dropbox customers. A system where they (the server) handle the encryption, rather than one where you (the client) manage it, has several advantages, including a "lighter" Dropbox program on your device (since it doesn’t have to handle encryption) and the ability to retrieve files for you even if you forget or lose your password. 2. Dropbox doesn’t want to cross the government.
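To make the key-custody difference concrete, here is a toy sketch in Python. The cipher is a throwaway XOR-against-a-hash construction for illustration only; nothing here reflects Dropbox’s actual implementation, and real files should be protected with a vetted crypto library, not this.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom byte stream from key + nonce via SHA-256 in counter mode.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A fresh random nonce per file, prepended to the ciphertext.
    nonce = os.urandom(16)
    return nonce + bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(key, nonce, len(ciphertext))))

# Client-side model: the key never leaves my machine, so the server
# only ever stores an opaque blob it cannot open.
my_key = os.urandom(32)
blob = encrypt(my_key, b"grade roster")
assert decrypt(my_key, blob) == b"grade roster"
```

In the server-side model, the `my_key` line above would effectively run on Dropbox’s machines instead of yours, which is the whole problem: whoever holds that variable can read the file.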
Dropbox has responded to these concerns with a lengthy FAQ, which I encourage everyone to read. But honestly, the FAQ troubles me, and makes it even more likely that I will seek an alternative cloud service, as it leaves many questions unanswered.
My Concerns:
Let’s start with the transparency of this issue. What Dropbox is claiming, or appears to be claiming, is that this change in the TOS does not reflect a policy shift, but is merely an attempt to clarify what has been the policy all along. I’ll take Dropbox at their word on this, but I still have concerns about their wording.
"That said, like all U.S. companies, we must follow U.S. law. That means that the government sometimes requests us (as it does similar companies like Apple, Google, Skype, and Twitter) to turn over user information in response to requests for which the law requires that we comply."
What Dropbox seems to be implying here is that they are required by US law to have what is known as a backdoor key (the ability to unlock any file) and to hand it over to the government when served with a subpoena. But this is not actually the case. If Dropbox has the ability to unlock the files, then yes, they have to give that over when they receive a valid request. But that doesn’t mean they have to build a system that allows them to do this. In other words, if they didn’t have the ability to unlock your files, the government couldn’t ask for that key; Dropbox could only give over the encrypted versions of the files, rather than the files themselves. This is essentially the issue in this article about the government wanting to be able to wiretap the internet. My understanding, having asked a few lawyers about this, is that the current state of the law does not require companies to serve up plaintext files.
Okay, at this point I hear many of you saying that you want this feature, that you want the government to be able to access the files of "the baddies," and since you have nothing to hide from the government you are not concerned. Let’s table that for a moment, and I’ll explain in a second why this is a dangerous view. For now, irrespective of this issue, there is a more significant one, which affects every user regardless of whether or not you feel you have something to hide from the government: a system which by design enables a third party to decrypt your files is by design not secure. Or: a secret between two people can only be kept if one of them is dead. A system which by design has a backdoor to enable third-party access is vulnerable to a security breach. As a way of thinking about this, consider the relatively recent case where a Google employee was accessing user email and chats. Yes, Google is concerned about user privacy, but any system, no matter how good its engineers, has holes unless the user is the only one with the keys. So here is the rub: by trusting Dropbox and their current system you are not just trusting Dropbox but a host of employees. Any system designed like this will have a security breach at some point. It might not be a large one, it might not affect many users, but it will happen; you are just rolling the dice, gambling that you are not going to be the one affected (a fair gamble in most cases). It’s not just software that you are trusting, but people, and people are usually the weakest link in any system.
Now, just as important for me is the type of atmosphere this private-government partnership entails. I realize many of you might not agree with this, and I don’t want to turn this into a big discussion here (a discussion I am more than willing to have in other places), but I prefer to play corporate interests against the government, keeping those two forces working against each other rather than siding together against the public. One of the particularly damaging developments we have seen on the web over the last five years is the ability of governments to control what happens online through extra-judicial means: collaboration with companies to curtail our privacy. For me, at least, it isn’t a matter of having something to hide from the government, but rather of knowing that I maintain control. Control of my own data, and of the data others have entrusted to me, seems to be an essential component of dignity.
But What Do I Care?
You don’t have to imagine that the government would want your information to see some problems here. Let’s imagine that through an engineering problem (a problem with the code), an employee problem (see the Google case above), or a deliberate hacking attack, Dropbox files suddenly become available. I actually have a good deal of student work, evaluations, letters of recommendation, etc. stored there at any given time. Aside from my own paranoia about data and privacy, there is a good bit of data that students and others with whom I work are entrusting me to keep private. Let’s imagine that your grade roster is stored on Dropbox and it gets compromised. Once that file is unlocked and passed around, there would be no getting it back. Leaving aside what kind of FERPA violation this may or may not be, I can imagine many students who might be harmed by this type of information. Have you stored judicial letters (for plagiarism cases) on Dropbox? I can think of a lot of information that I wouldn’t want out there even if it wouldn’t directly harm me.
Now about 80% of the stuff I store on Dropbox has no privacy issue associated with it, things like journal articles or chapters I want to read, or syllabi & assignments, or my running schedule, or stuff that is publicly available elsewhere like my CV. But there is enough there that I am concerned and looking for other options.
I will also note here that, given the recent FOIA filings by conservative groups going after professors, being paranoid about data isn’t a bad thing; it removes the option for others to share my data (this is why I use my own email more than the University-provided one).
It’s true I have become somewhat paranoid here, using a VPN when on campus to ensure that the University can’t monitor my internet use, but I don’t think you have to be too paranoid to see this as an issue.
Questions for Dropbox
Having said all of this, I think there are probably several things Dropbox could make clear that would help.
1. How many employees have access to user files? Is there a dual-control system (do two employees have to sign off on access, or are there a certain number of employees who can do so on their own)? Are records kept anytime user files are accessed this way, so that the company creates a clear audit trail? Do employees (and any contractors they deal with) have background checks?
2. Under what conditions do they give the government data? The FAQ suggests that they would fight these requests if they found them to be lacking in merit. Have they done so? Can they make this process transparent? Is there hard data on this?
3. What is being done to fix the architecture issues? (Here Dropbox runs into a problem: the more it says about its security, the more susceptible it is to vulnerabilities, but the less it says, the less trustworthy it seems. Security through obscurity really isn’t a good idea.)
4. Does Dropbox think it is their legal responsibility, ethical responsibility, or both to share information with the US government? Would they do so without a warrant? The policy says "request"; what constitutes a request?
The Other Options
1. As the Dropbox FAQ suggests, the first option is to encrypt your files before they sync with Dropbox. If you encrypt your files before syncing them, using something like TrueCrypt, nobody else will be able to access them. The disadvantage is that your files are then not accessible on your iPhone, iPad, or Android device. In other words, not so useful an option.
2. Use Dropbox only to store public or pseudo-public information. Again, 80% of what I store on Dropbox I am not concerned about, so maybe I just store only that type of stuff there.
3. Go back to using a flash drive. (Uhh, no thanks.) This also doesn’t let me use it across other platforms (iPad, phone, etc.)
4. Create a partition on my phone that would store these files. They would always be with me, and I could run something like Samba file sharing and Root Explorer. Accessing the files this way would be far from trivial, though. Really, I like the cloud features.
5. Switch to a different service. Both SpiderOak and Wuala seem to offer services similar to Dropbox which encrypt the files on the user side. Both of these have applications for all the devices I use (iPad, Linux Computer, Android Phone).
6. Set up my own Dropbox-type service on my home computer. Sure, this can be done, or I can just run a VNC back to my computer and fetch the files I want, but this is less than optimal. There is also an open-source Dropbox alternative being developed, called SparkleShare.
7. Pogoplug. Pogoplug works by creating your own cloud server at home.
There is one meta-issue here. As the leader in this type of service, many other applications rely on, and provide support for syncing with Dropbox, for example iAnnotate or GoodReader—usability that would be sacrificed by switching services. And as the easiest and most frequently used, Dropbox is the easy one for me to recommend to faculty members who are less than computer savvy.
Right now I am investigating SpiderOak, Wuala, and Pogoplug. I will let you all know what I discover. My preferred option, though, would be for Dropbox to address the current issues, because, you know, I really do like their service.
David Parry
Blog
Jun 09, 2016 06:22am
This weekend The Chronicle of Higher Education published an opinion piece by Michael Morris arguing that, in the name of campus security, campuses should start data mining all student internet traffic. Or as the not-so-subtle, fear-mongering, almost fit-for-Fox-News title says, "Mining Student Data Could Save Lives." Morris’s article, to put the matter bluntly, is a phenomenally bad idea. Indeed, his argument is so ill conceived that it is difficult to know where to begin in exposing the problems. I even question The Chronicle’s choice to publish this piece. Yes, opinions are helpful for generating discussion, but a certain bar of competency should have to be cleared before The Chronicle is willing to co-sign your piece, even under the commentary section.
Let’s start by being clear on what Morris is calling for. You have to read through to the fifth paragraph to understand exactly what Morris wants:
"If university officials were to learn that a student had conducted extensive online research about the personal life and daily activities of a particular faculty member, posted angry and threatening comments on his Facebook wall about that professor, shopped online for high-powered firearms and ammunition, and saved a draft version of a suicide note on his personal network drive, would those officials want to have a conversation with that student, even though he hadn’t engaged in any significant outward behavior? Certainly."
In other words, Morris is calling not for data mining, as his title suggests, but rather for total surveillance of all student internet activity, with an eye towards mining that data. What Morris is suggesting is not only that universities monitor student email and conversations on university servers and equipment (student email addresses, Blackboard conversations), but all student internet activity. He is talking about monitoring internet search traffic (what students search for on Google), what students post on any site (Facebook walls, blog comments, etc.), what students shop for online (any purchase you make or any purchase you look at making), and even opening up and looking at any files you have stored (his suggestion that the university would mine a suicide note written and saved on a computer would involve opening and analyzing said file). And I assume, even though he doesn’t mention it, Morris would also like to monitor and then mine all IM traffic and Skype calls. Calling this data mining hides the fact that the first step is actually surveillance: collecting the data, with the end goal of mining what has been collected.
Technologically Morris doesn’t know what he is talking about and ethically he equates himself with some of the world’s most oppressive governments. In short this proposal reads as if it is written by a despotic leader who has spent too many hours watching poorly conceived science fiction.
In the first iteration of this post I wrote several lengthy paragraphs explaining how the surveillance Morris outlines here is not as technically trivial as he seems to make it; it is obvious from this piece that Morris has little to no sense of how this technology works (someone please explain to him the difference between http and https, because he seems to think that all internet traffic is the same). Morris’s piece argues that technology is a "crystal ball" (his word, not mine) that would allow us to predict and control the future. The technology he describes here is neither as trivial nor as accurate as he suggests. But ultimately I decided to cut out all of the technical bits demonstrating Morris’s ignorance (perhaps he has been watching too much Minority Report or Person of Interest) and instead focus on the more important issue: the ethical one. Morris is arguing that the government should monitor, without cause, all the internet traffic of some of its citizens. (Maybe I’ll write the technical stuff later.)
Let’s put it in no uncertain terms: Morris wants total surveillance of all student traffic on the internet, all the time. In other words, he is calling for the wiretapping of all private digital communications. Since in this particular circumstance, and in many he outlines, the students attend a public school, and the police would be the ones doing the monitoring (or at least involved), what is being suggested here is that private citizens have the entirety of their online communications surveilled by the government. And this monitoring would happen regardless of the student: everyone, all students, no probable cause, no reason for suspicion, just surveil everyone 100% of the time. Total state surveillance. Perhaps Morris has a different measure of what is reasonable, but in my America the government is limited in the degree to which it can monitor its populace without a subpoena (I know, FISA, but we can save that for later).
Morris’s logic goes something like this: in rare circumstances a student will commit an act of violence; in order to prevent this we should curtail the civil liberties of all students. What’s worse, though, is the bizarre logic deployed to justify this type of surveillance. Morris notes that companies already engage in this kind of monitoring (credit card companies, Amazon, Netflix, Facebook, etc.).
Let’s take these "justifications" one at a time. Effectively, in the first instance Morris is arguing that because a small percentage (an extremely small percentage) of individuals might commit a crime, we should extensively violate the rights of all citizens. Morris lines the argument up by beginning his piece with a colorful fictional scenario: imagine a student, "his sweating hands firmly clutched the grips of the twin Glock 22 pistols." These sorts of hypothetical, the-world-is-really-dangerous scenarios are often used to justify curtailing liberties; after all, who wouldn’t want to prevent the kid with twin Glock 22 pistols? But in reality it doesn’t work this way. Sure, we could limit all sorts of social ills by restricting citizen behavior (let’s start with curfews), but we don’t.
Or put it, perhaps, in terms that would directly apply to Morris. We know from research that police officers are more likely to commit spousal abuse than the average individual. Thus, in order to prevent the scenario of a cop with twin Glock pistols killing his wife, we should institute a policy of monitoring all cops, all the time. All internet activity by all cops should be monitored. We should know if they visit any sites that might indicate violent tendencies. Also, we should put cameras in their homes to record how they act at home, so that if they raise their voice or engage in behavior that indicates violence, we could intervene. I am sure Morris would not be for this scenario, but it is the equivalent of what he suggests; the only difference is who is monitored and who is doing the monitoring.
His second justification is that companies do it anyway, so why shouldn’t universities? I find it odd that we would look to these companies for guidance on respecting student privacy at precisely the moment when there is a large public conversation developing around the degree to which they don’t respect privacy, and around whether the government should intervene to establish guidelines. That students willingly share information online is hardly a justification for violating their privacy by monitoring all of their internet communication. Furthermore, the scale at which Morris suggests students should be monitored in no way equates with what is being shared (mostly publicly) in particular online venues. In the first case, students choose to share particular pieces of information on Facebook, making them (again, mostly) public for others to view, and remain empowered not to share other aspects of their online communication. Private online communication is still possible. Second, in the case of corporations, students (at least theoretically) willingly enter these relationships with corporate entities, trading privacy for some other benefit. With government monitoring there is no opt-out: use the internet to communicate and you will be monitored. Finally, the consequences are in no way comparable to what happens with these private companies. A credit card company calling you to verify that you did indeed purchase an $800 pink stuffed elephant is a minor inconvenience; the government detaining you for hours of questioning because you called your professor an asshole on a Facebook wall hardly constitutes the same level of inconvenience.
Imagine the depth of invasion this constitutes. Emails about private family matters. Monitored. Concerned about a medical condition, searching the internet. Monitored. Have a drug or alcohol problem, reaching out to a support group. Monitored. Organizing a political protest. Monitored. You name it. Monitored. This is why we have restrictions on what type of surveillance our government can conduct. We should find it a little more than disturbing that Morris’s position aligns him with the Stasi, or, if you prefer more contemporary examples, with despotic regimes: "In order to preserve the safety of our citizens we must monitor all of their communications."
Perhaps people are lulled into believing that this type of surveillance constitutes a minor inconvenience because one would only be monitoring online communication. But imagine the outrage that would ensue if Morris were suggesting that police begin routinely searching all dorm rooms in order to ensure that no illegal items are on campus. Ultimately I would argue that monitoring my online communication is far more invasive than searching my physical property. Heck, just knowing what someone has searched for on Google in the last month can often tell you a lot more about them than looking through their apartment.
Morris’s argument is the classic, but severely flawed, one that we should give up privacy to maintain security. As Daniel Solove has argued, this is a fundamentally misinformed approach: in the first case because one rarely achieves security, and in the second because this type of ubiquitous surveillance itself constitutes a serious harm to the community. As Solove points out, privacy is not just an individual good; it is a public one as well. A community without privacy is an unhealthy one. Individuals need control over what type of information is made public (even if they don’t always exercise said right), and over what types of information monitoring bodies can collect about them, not only for individual health but for public health as well. A community with no sense of privacy is a dysfunctional one. Imagine a community where all communications are monitored by the government (again, this is either very directly what Morris is calling for, in the case of the public university, or by proxy in the case of the private one, where the institution de facto serves as the local governing body). One doesn’t have to have read Foucault to understand the degree to which severe government monitoring adversely affects the population; 1984 or Brazil will work just fine for this.
Let’s take even the best-case scenario that Morris offers here: that we are going to use this technology to monitor all students, looking for ones who might have mental health issues. How is this data going to be used? Are the flagged students going to be expelled? Are students whom the predictive algorithm decides are risks going to have mandated counseling? Will this be permanently attached to their files? Will there be a no-class list equivalent to the no-fly list? And given the issue of liability, aren’t institutions likely to err on the "conservative" side, questioning any and all students who might pose the slightest risk, for fear that if they don’t they will be liable in the future? (And again, keep in mind the technology doesn’t work this way; looking for "mentally unstable" people is not nearly the simple analysis Morris implies it is.)
Even if this surveillance worked the way Morris thinks it does, it is not even the best way to accomplish what he wants. Rather than actually trying to address the larger issues, or develop a more reasonable plan, Morris proposes the "magical" technological fix. Which of course is neither magic nor a fix. Compare this to a plan that would call for increased funding of mental health clinics and for building a positive relationship between residence staff and students, so that those with concerns would speak to someone. Sure, staffing a mental health clinic is costly, but it is more effective and, what Morris doesn’t want to tell you, cheaper than the solution he proposes. As Morris himself admits, "In the aftermath of nearly every large-scale act of campus violence in the United States, investigation has revealed that early-warning signs had been present but not recognized or acted upon." If these early warning signs exist, why do we need more monitoring?
But let’s be clear: Morris isn’t after safety or mental health. This is about something far more nefarious; this is about control. And to understand this argument it is important to situate this claim within the context of higher education in California, where Morris works. The mental health angle here is just a ruse, a rhetorical strategy to convince people that students need to be monitored for community safety. Wholesale monitoring of the population is something those with power have wanted to do for a long time, and given the recent tense situations between students and the California system, situations often mediated by the police, certainly part of the story here is a feeling on the part of police that all students must be monitored and controlled all the time.
If you doubt my reading here, all you have to do is turn to his paragraph on FERPA. Morris argues that yes, FERPA might be a concern, since you would be monitoring students’ private conversations, but "luckily" for those who want to monitor, there is an exception to the rule that would allow this type of monitoring. In other words, Morris treats FERPA as a technical/legal hurdle that can be circumvented, not as something that expresses a legitimate concern about protecting student privacy. Morris deals with the letter of the law ("look, it’s easy to get around") without addressing the reason the law is there in the first place ("protecting student privacy is a philosophically and ethically important community principle"). Notice that nowhere in the essay does he recognize that this type of surveillance might constitute a privacy concern (the only limit he mentions is taking care to make sure that students maintain a right to due process). Student privacy is treated as a hurdle to be overcome, not a value to be respected.
I am an educator because I believe college can be an incredibly important step in individuals becoming productive members of their community. In philosophical moments, when people ask me what I do, I respond, "I work with students to help them become the people they want to be." I find it loathsome, and counterproductive, to suggest that the best way to help students become citizens is to monitor their behavior all the time. What types of individuals would be produced by such a community, a community under constant surveillance?
David Parry
Blog
Jun 09, 2016 06:20am
Five years ago I wrote what, for some reason, was the first post on my blog to get any sort of attention. Basically it was a rundown of the "Top Ten" software tools I use as an educator. At the time I was consistently asked by colleagues what computer "stuff" I used, so I decided to narrow it down to one post and publish it. Indeed, when I first started blogging I thought my entire website would be about tech tools and tips for academics. That role is now fulfilled in a far better way by so many other sites that I hardly use this site for that anymore, or at least not directly.
But I did think that, more than five years on, the anniversary was worth revisiting, taking a look at how my media environment has changed. I actually think these sorts of posts can be pretty useful, as the how of our computer use is often obscured, despite the fact that it is so varied. In my classes I often like to have students talk about what programs, apps, and techniques they use as a way to show a diversity of approaches.
The most substantial change I have made is moving away from Apple. Once an avid promoter of their products, I am now so concerned about the computing environment they are building that it is worth my time to look for something else. My main computer has moved from Mac OS to Ubuntu (http://www.ubuntu.com/). Indeed, although I still have a Mac laptop, I rarely use it for anything I can’t do on Ubuntu, and I am looking to purchase a new laptop soon, one I suspect will not be made by Apple (or if it is, a MacBook Air perhaps, I will use it to run Ubuntu). I no longer use an iPhone, which has been replaced by a Nexus S (which I tell people is so much more powerful than an iPhone). The only Apple product I consistently use, and find to be ridiculously useful (more on that later), is the iPad, but with the substantial number of impressive Android tablets coming out, I suspect it is only a matter of time until I migrate away from this product as well. There is a decided shift here to platform-independent services, and to services which offer greater control, even sometimes at the cost of ease of use.
As such, this list has changed substantially. So here, in rough order, is my list of the essential pieces of my computing environment.
-SpiderOak. I replaced Dropbox with SpiderOak because of security concerns, and I haven’t looked back since. Sure, SpiderOak is a tad more difficult to set up, but the fact that my files are encrypted on my end, so that SpiderOak has no access to them, makes it more than worth it. I still use Dropbox to share articles or readings with students and to store some files, but SpiderOak is my main system. SpiderOak works across all my devices (phone, laptop, desktop, tablet).
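The appeal of this model is that encryption happens before anything leaves your machine, so the provider only ever stores ciphertext. A toy Python sketch of that client-side idea (a keystream derived by chained hashing; this is an illustration of the concept only, not real cryptography, and none of it reflects SpiderOak’s actual implementation):

```python
import hashlib

def keystream(passphrase: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by repeatedly hashing the passphrase.
    out, block = b"", hashlib.sha256(passphrase).digest()
    while len(out) < length:
        out += block
        block = hashlib.sha256(block).digest()
    return out[:length]

def toy_encrypt(data: bytes, passphrase: bytes) -> bytes:
    # XOR with the keystream; applying the same operation twice decrypts.
    ks = keystream(passphrase, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"my research notes"
blob = toy_encrypt(secret, b"my passphrase")       # what the server would store
assert blob != secret                              # the server sees only ciphertext
assert toy_encrypt(blob, b"my passphrase") == secret  # decryption happens locally
```

The point of the sketch: the key never leaves your machine, so the storage provider cannot read the files even if compelled to hand them over.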
-WordPress. Oddly this didn’t even make the list back in 2006, even though my blog was running on WordPress. But now I use it for managing not only my blog but my main site, sites for the classes I am teaching (screw BlackBoard), and a separate one for my current research project. The ability to quickly roll out a good-looking website that is easy to update and highly customizable is invaluable. I am always recommending that people build an online presence they control to display their scholarly work, and WordPress is the easiest, and one of the most powerful, ways to do this.
-iAnnotate PDF. The first tablet app on the list, and it is ridiculously useful. I use it to read and mark up PDFs, both to comment on student work (especially grad students’ paper drafts) and to read journal articles. This lets me "carry with me" all the papers I need to read in digital format and still mark them up as if they were paper. Seriously, probably half the time I use the iPad it’s with this app.
-Instapaper. Throughout the day I come across various articles that I want to read but don’t have the time to read right then. Instapaper lets me save these pages for reading later. I actually have a habit of carving out an hour or two to read through everything I have saved. This also has the bonus effect of not being distracted by articles which may seem like a good idea at the time, but a couple of days later seem irrelevant or only interesting as a distraction. I also use IFTTT so that favoriting a tweet in any Twitter client automatically saves it to Instapaper. After iAnnotate, Instapaper is probably the app I use most. (Although you can access Instapaper on any computing device.)
-Astrid. I played around a lot (way too much, perhaps) with various to-do list organizers, but this is the one I settled on. Mainly it came down to interface and cross-platform use, coupled with the ability to connect to other services I use. I am still trying to figure out a way to integrate a to-do list with voice commands effectively. I might hook Astrid to Producteev just to accomplish this.
-WiTopia. This is a paid VPN service. When connecting to the net via an untrusted connection, a VPN is critical. Our university, I am sure, provides one, but I prefer to control my own. A serious advantage of WiTopia is that I can pick from an elaborate range of locations, enabling me to connect to the net with an IP address from any one of a number of countries and get outside the "American"-centric net (not to mention watch the BBC). Even on campus I will use the VPN if I want to hide my traffic for any one of a number of reasons. WiTopia is safe, easy to install, and works across all my devices.
-AutoHotKey or TextExpander. These programs are ridiculously useful. I specify a series of characters, and when typed they are instantly replaced by another. For example, I type "aadd" anytime I want to add my mailing address to something, and "aadd" is replaced by my full snail-mail address. I use this for titles of books I have to type a lot, or code shortcuts: I never actually type "<a href=""></a>". You can also set up these programs to automatically drop in the most recently copied text, insert today’s date, etc.
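The substitution mechanism behind these tools can be sketched in a few lines of Python. The abbreviation table here is hypothetical, and real expanders like AutoHotKey intercept keystrokes system-wide rather than transforming a finished string, but the core idea is the same:

```python
import re

# Hypothetical abbreviation table; the entries are examples only.
SNIPPETS = {
    "aadd": "Jane Doe\n123 Example St.\nAnytown, TX",
    "ahref": '<a href=""></a>',
}

def expand(text: str) -> str:
    # Replace each whole-word abbreviation with its snippet.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, SNIPPETS)) + r")\b")
    return pattern.sub(lambda m: SNIPPETS[m.group(1)], text)

assert expand("Link tag: ahref") == 'Link tag: <a href=""></a>'
assert expand("nothing to expand") == "nothing to expand"
```

AutoHotKey itself expresses the same table as hotstrings (roughly `::ahref::<a href=""></a>` in its script syntax), which fire as you type anywhere on the system.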
-gedit. I used to use word processors or Scrivener to write. Now I just use a simple text editing program. Forget making the text look good; that’s for later. Now when I write, I work in a very simple text-only environment. On the Mac I had switched over to TextMate; on my home computer I use gedit. Seriously, 80% of what I write starts off as basic text.
-A Good Hosting Service. The cloud and free services are one thing, but the ability to host your own site and control your own data is, for me, crucial. Get your own hosting service for your website, set up your own email, and stop counting on someone else to do this for you. I prefer HostGator, but there are lots of good ones out there.
David Parry
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jun 09, 2016 06:19am</span>
|
"Washing one’s hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral." - Paulo Freire
"This is a great discovery, education is politics!" - Paulo Freire
Q: How do you know you are succeeding as a program?
A: When you are up at 4:00am watching a project your students and faculty built spread around the globe, being discussed in languages you can’t even identify.
By now most of my followers and readers are no doubt aware that a project built here at our Emerging Media and Communications program has been receiving a great deal of attention. Actually, "a great deal of attention" rather understates the case. This all more or less started on Monday, when The Chronicle of Higher Education published a story written by Jeff Young about this project: EnemyGraph. (The actual story was "published" on Sunday but really did not get noticed until Monday morning.)
More on the project in a moment, but for now a brief explanation of how this story spread. By late Monday morning the story was featured on Slashdot, from where it jumped to numerous other tech blogs and ultimately to more mainstream news organizations. By Monday evening EnemyGraph was spreading internationally. And so it was partly due to insomnia, partly due to a rather full schedule, that I found myself awake at 4:00am on Tuesday, opening up a web browser to run searches on the story and a separate tab in TweetDeck to peek at the Twitter conversation. It’s at this point things got a bit unwieldy: EnemyGraph was spreading to so many places around the globe that I couldn’t follow the conversation. When I first tuned in there was a slew of tweets in Russian, followed by a collection in Hindi, then Thai (for some reason it was moving eastward). I lost count at 20 different languages. This is how I measure success as an educator. It has been covered on ABC, CNN, The Wall Street Journal, Mashable, The Huffington Post, NPR . . . well, look for yourself.
I am not writing this just to brag about EnemyGraph; I have a much larger motivation for this post. And let me be clear up front, this isn’t even my project. EnemyGraph was built by Dean Terry (faculty here in EMAC) and two students, Bradley Griffith and Harrison Massey. My role in all of this has been really, really, really minor: just support, talking with the team about it, sharing my thoughts. But regardless, the last week has been one of my most enjoyable as an educator, and herein lies one of the big takeaways of EnemyGraph, and ultimately the motivation for this post: EnemyGraph is an excellent example of education done right. I’ll take it a step further: EnemyGraph is one of the best examples of education in the age of the digital network I can think of.
EnemyGraph. What? You Want People to Make Enemies?
To understand my claim it’s important to contextualize EnemyGraph a bit and explain why I think it is a fabulous project. Again, let me stress that this isn’t my work; Dean and the students are the driving force, "the artists" here. I am just playing the role of cultural critic, analyzing and critiquing the project, albeit from an insider’s perspective. Indeed, I think some of my thoughts on this might even disagree with what the team says about EnemyGraph. (For those who want to read in Dean’s own words what he thinks the project is about, his artist statement of sorts, you can check out his own post about EnemyGraph.)
Let’s face it, there is a serious problem with Facebook. Okay, actually there are lots of problems with Facebook, but for understanding EnemyGraph there is one serious problem worth focusing on: Facebook has become a private corporate space which dominates our public and civic lives. The privatization of the public sphere, of our civic interactions, is a serious concern to me and other scholars. Let us set aside the issue of data mining or Facebook’s motivations; we don’t have to imagine Facebook as an intentionally nefarious actor to recognize that there are serious concerns here, when to many Facebook has become "the internet," holding a monopoly on communications the likes of which has never before existed. More and more our social lives are being played out in privately owned spaces, and the implications of this are still being worked out. We need, I would argue, to be concerned about this, focused on critiquing and understanding the ways in which our public sphere is being radically transformed. To be sure, Facebook isn’t the only actor here; it’s just the most prominent, and for now the one which concerns me the most. Privatizing the commons of public discourse strikes me as a seriously dangerous civic development.
Understand, this is a particularly layered problem, one I am always trying to get my students to "see." Students today, as Siva Vaidhyanathan first put it to me, are swimming in an ocean of media; our obligation is to get them to step outside the ocean and "see" what they are swimming in. It is in this regard that I have students in my intro-to-the-major class quit Facebook. I want them to see how Facebook is quite literally engineering social relations, and doing so in ways students are not totally aware of. And this gets us to EnemyGraph. Consider how Facebook allows you to "friend" things, companies, and corporations, to "like" things, companies, and corporations, but not to dislike them. Facebook doesn’t want a dislike button because it would undermine both the pleasant (sterile) community they want to have and the business model; businesses don’t want a dislike button. Or consider how Facebook has altered the very meaning of the word "friend," where "friend" now connotes a different social relation than it did just 5 years ago. Facebook is coding, engineering, setting the rules for what types of relations we can have, with little to no input from us. And yes, the creators of this app are not idiots; the use of the word "enemy" here is intentional. Facebook abuses the word "friend"; EnemyGraph just points this out.
Now as an academic I could write about this, try to get people to pay attention to this problem, as I have, and as others have. Of course there is another option, which is to stage a project, a piece of art, to perform the critique. There is a long history of this, and I would argue it is one of the most important roles of art (and the humanities in general): to produce objects which perform a cultural critique and, more importantly, get the larger culture to engage in a conversation about the issues at hand. Indeed, many of the projects I admire most adopt this tactic. One could point here to graffiti art (Banksy is one of my favorite artists), street performances (such as "Operation First Casualty"), groups such as the Yes Men and Critical Art Ensemble, or even specific projects such as Cow Clicker. It is true that I have a particular affinity for the "stir shit up" approach, one that Dean shares; this is not to discount other approaches, but I find this tactical one particularly effective. This is how I interpret Dean’s quote about social media needing a shot of Johnny Rotten. This isn’t simply a matter of poking a finger at Facebook; the stakes are much, much larger.
It is against this background that I find the reactions of some academics at times puzzling, and at other times downright disturbing. The idea that this was built just to foster negativity or bullying is a shallow reading, one born out of a total lack of engagement with EnemyGraph. But more importantly, the idea that academics shouldn’t be working on projects which actively engage the world around us, that produce and foster conversations about the role of technology in our lives, comes from an academic community I have no interest in being a part of. And lest you think I am being too critical of academia here, it has been interesting to note how the conversations in "non-academic" communities have, in my opinion, been far more nuanced and in-depth, engaging both the object and what it is after.
Not to get too detailed here about EnemyGraph, but this is a project long in the making, not something whimsically produced overnight to encourage bullying or trolling online (it has been at least a year in development). This is a carefully thought out, well planned critical performance, an engagement with the culture of the privatized public space of Facebook. Part art project, part performance, part critique, part technical object, part critique of technology, part network exploit, part strategic operation, EnemyGraph is teh awesome.
I couldn’t be happier for their success: they have 20,000 users, have overwhelmed their servers, and have produced so much press coverage I can’t keep track of it all. But that’s not what is most exciting. The thing that makes me most proud here is what this project says about the type of learning environment we are building in EMAC at UT Dallas.
Conversing About EnemyGraph. The Educational Model.
I won’t go into detail here about what I think EMAC is, or the design of the major; you can read my longer reflection on that if you are interested. But the shortcut version is that the program has two components, critical and creative. That is, we want students to be critically engaged with the networked digital media environment and to be creators of media content, to repurpose social media for their own aims. I think of Howard Rheingold’s insistence that we need to teach students to be "Net Smart." And that’s where this story gets really important to me.
As EnemyGraph took off and started to gain traction as a media event, you could see the story reflected in a general energy among our students. Other EMAC students were excited about the media coverage the project was receiving, excited about the attention EMAC was generating, but here is the important part: they were also talking about the project online, across all the various networked platforms. Numerous times I would scan a news story, skip to the comments, and discover that our students were commenting on the story, explaining EnemyGraph, critiquing Facebook, even critiquing EnemyGraph. The students in our program were demonstrating a network literacy; this wasn’t just a classroom education, it was engaging the world through the digital network. Just one brief example: one of our students is from Brazil, and she was explaining to me the conversations she has been having (in Portuguese, in networked spaces in Brazil) about EnemyGraph, how it is received there, and how placing it in a different cultural context changes the dialogue. This is education at its best. And it is not just the conversation online that I see students participating in; in the hallways, in classes, our students are engaging in a very thoughtful dialogue about this project and, more importantly, the problem of Facebook. I can only assume that our students are having this conversation all over, both on and off campus.
And so this strikes me as one of the serious lessons to take away from EnemyGraph: the digital network changes the landscape of educational possibilities. This is something Bradley Griffith pointed out both in his interview with Jeff and in the comment section of The Chronicle Article.
Certainly in the past education has not been confined merely to the classroom, but the network changes the scale and pace of what is possible. The more we can encourage students to collaborate, to see the world as their audience, the more successful I think we can be. Gone are the days when we must restrict students to small audiences, performing for us as instructors; now the network makes possible projects and learning environments with massively expanded audiences. Forget the classroom, the local community, or even the nation: EnemyGraph has turned into an international lesson. Sure, we can keep asking our students to write papers for us interpreting some 18th-century text, analyzing some obscure symbolism, and give them an audience of one, or at most an audience of 20 at some conference. But give me one EnemyGraph as a learning project over 1,000 antiquated research papers any day. Or, put another way, who needs another boring marketing or business plan; go have the students make something.
This isn’t to say that professors ought to adopt the "stir shit up" model that Dean and I both prefer, or even that it is appropriate for every class, but certainly there is a pedagogical advantage to having students actively critique and create media objects that exist beyond the classroom. I am not interested in developing passive consumers, or students who are interested in figuring out "how to make people click ads." I want to help students critique this world, imagine a different one, and help produce a better one. It is in this vein that I so value and appreciate what the EnemyGraph team accomplished here. But I also teach a class where students are working on civic media projects: one group is attempting to collect coming-out stories to help LGBT youth, another is working to help animal adoption, and another is boldly trying to raise enough money to send a kid to college. These are the things that inspire me as an instructor, that make me feel good about our program, and that keep me up late at night thinking, "what’s the next thing we could do?"
So many people who commented on this project claimed it was somehow irresponsible, asking "how could a program support this?" or "what kind of professor does this, builds EnemyGraph?" But I have a better question to ask of educators: "Why aren’t you trying to build the next EnemyGraph?"
David Parry
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jun 09, 2016 06:18am</span>
|
10 (Mostly) Simple Steps.
Last month I had the privilege of speaking at the annual Computers in Writing Conference. The organizers were interested in hearing my perspective on open scholarship and the university.
I am not going to recast my entire presentation here; I actually might write it up as an article for something else. But a keynote is a different genre from a blog post, and different again from a formal essay, so simply making the written version of my talk available is probably not so useful.
But to briefly summarize: I made the claim that academic interests are increasingly running counter to those of the publishing industry. And while I find recent calls to move toward open scholarship, such as the Harvard letter, important, I think that economic justification is not the primary reason we ought to pursue open scholarship. In short, I think this is a moral issue. Knowledge cartels are increasingly controlling and restricting knowledge production and dissemination. This is happening broadly across our culture, but the university’s complicity in it, from drug patents to for-profit publishing, is troubling.
I am not going to rehash the long argument here; if you want, you can watch the video. Instead, building on the idea that this is the moment to move to open access, I want to highlight the how-to portion of my talk: the ten steps to breaking up the knowledge cartels.
To me it is simple: we have a bizarre situation where we give away our product for free to these cartels, who then turn around and sell it back to us. We give away our knowledge for free; the only question is whether we want to give it away to the public or to the cartels. Overcoming the knowledge cartels in the academy is simply a collective action problem. That is, all we have to do is act together. Acting alone has costs, but if we collectively resist the cartels we solve the problem. To be sure, complex solutions are needed to replace the cartels, but the first step is overcoming them, which is shockingly simple. I offer 10 steps to achieve this goal.
1. Creative Commons License Everything. I said at the talk, and I would reiterate here, that this is the most important step; in fact this one simple act of licensing everything we do under Creative Commons would go a long way toward undermining the cartels that profit from controlling our knowledge. Creative Commons essentially bequeaths your work to the commons, ensuring that it cannot be locked down. There is a degree of control here: the originator of the work can decide whether someone may re-use it for commercial purposes or non-commercial purposes only, and whether to allow remixes or only copies that preserve the original in its entirety. But importantly, it ensures that the knowledge will enter the commons. We should license everything we do (syllabi, talks, books, journal articles) under Creative Commons.
2. Publish Only via Open Access Sources. This is pretty simple: we already give our work away for free; the question is to whom we should give it. Giving it away to the public produces a better knowledge commons, and serves the public, not the corporations. Knowledge which isn’t public isn’t knowledge.
3. Refuse to Work for the Gated Publishers. In addition to our writing labor, which we give away for free, academics provide other labor to these publishers, also for free. Stop it. Stop serving on the boards of these journals; stop peer reviewing for them. No more sharing our free labor with them. If you get an email asking you to serve in one of these roles, your first response should be, "Under what license will this publication be made available?" If the answer does not entail some method of open access, turn them down and tell them why you are doing it. Recently I received two emails, one requesting that I review an article for a journal and another asking that I serve as a reader for a book. My first question in both cases was: is this open access? The journal issue was, and I happily agreed. The book, however, was not. I turned them down, and made it very clear to them why.
4. Actively Support Open Access. The corollary to the prior point is to actively seek out and work for open access distribution models. One of the myths about open access is that the work is not peer reviewed, that the quality is less. Of course this is absurd: nothing about an open model indicates diminished quality, or suggests that an article cannot be peer reviewed. Indeed, there are many models out there (I’ll just recommend Kathleen Fitzpatrick’s book for an in-depth discussion). So at this crucial moment, as these initiatives are developing, they can use the support of faculty, both to lend them intellectual gravitas and simply as a show of support. Serve as editors, readers, peer reviewers, board members.
5. Do This Regardless of Rank. I think this is the point which received some of the strongest pushback during my keynote. I used to think that the best path was to call on full and associate professors to lead the way, moving to open access being too great a risk for contingent labor, grad students, and junior faculty. But a few things changed my mind. First, since this is a collective action problem, the greater the numbers the greater the shift, and a sudden move by untenured faculty would signal a substantial one. Second, the urgency of pressing our case means that waiting for slow change isn’t really the best option. And finally, I am not sure we can count on the tenured faculty; faculty who don’t do something before tenure aren’t likely to change after. I am up for tenure this year, and we will see how this goes, but as my colleague Jon Becker says, this is "a hill worth dying on."
6. Make Open Access a Criterion in Institutional Decisions. If you are on a hiring committee, ask candidates if they have published in any open access venues. If they have not, ask them why not. Hire people who are committed to open access; make this part of your hiring criteria. The same goes for tenure committees and tenure review. For those with this kind of influence, get open access mentioned in tenure guidelines, and again reward it during tenure and promotion evaluations. By doing so you will make it easier for junior colleagues to take the risks I outlined above.
7. Make These Choices Public. Part of the way we overcome the collective action problem here is to publicly commit to open access, to not only make the moral choice but to testify to this choice, making it easier for others to do the same. Sign petitions such as "The Cost of Knowledge," or the WhiteHouse.gov petition demanding open access to taxpayer-funded research. Write in whatever venue is available to you that you are making the move to open access; explain why; encourage others to join. When a knowledge cartel asks you to work for them, as an editor, a reviewer, or an article writer, explain to them why you won’t do it, and then make that refusal public. So when Routledge asks you to review a book (as they recently did with me), tell them no, and then tell everyone that you told them no. This has the double advantage of communicating to the original author that if they want their submission reviewed they should insist on open access up front.
8. Extend This Principle to All Institutional Choices. I realize much of the focus on open access centers on our scholarship, but syllabi, lesson plans, and teaching techniques are equally valuable to the commons. There is no reason things like textbooks need to be expensive; these should be free and open, available to all. The textbook market is perhaps one of the biggest rackets in the academic publishing industry. And for the love of all that is holy, stop using proprietary software and other systems in our classrooms that support these knowledge cartels. Even if this software were well built (which it isn’t), using it is pedagogically and ethically wrong. Ideologically, systems like BlackBoard are just another piece of this problem. Go open source with our learning tools; stop letting these knowledge cartels profit from education.
9. Exert Pressure on Professional Institutions. MLA, ASA, CCCC, whatever professional organizations you belong to, whatever conferences you are part of, make them aware of your demand for open access, and get them to tack in that direction. Because pressuring these institutions works.
10. Pirate. Stop respecting the rights of the knowledge cartels. The knowledge they have locked down is ours, not theirs; do not recognize their right to it. If someone wants something that is locked behind a paywall, share it with them. Copy, distribute, and don’t respect the copyright of these cartels.
Again, this is simply a collective action problem; all it takes is for us as academics to stand up and say enough. Indeed, any other choice not only harms our collective interest but, more importantly, the interests of the public we serve. Open access is the only ethical choice.
David Parry
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jun 09, 2016 06:16am</span>
|
(First, a brief disclaimer. Generally speaking I like the MLA; I think its core mission, to advocate for language and literacy education, is an important one. And for those who don’t know, my PhD is actually in English, so I feel a certain affinity for the scholars there. Recently the MLA has made a turn to promote what I feel are important issues in the field, like open access journals. Indeed, I think Rosemary Feal and Kathleen Fitzpatrick (the new head of scholarly communication) deserve a lot of credit for taking the MLA in the right direction, but . . . .)
Open-Washing
The Modern Language Association released the Job Information List (known as the JIL) last week, but it is probably more accurate to say they released the database. The term "list" here refers to the bygone era of analog job lists and publications; now job seekers log onto a website and view jobs posted by the MLA. Except they didn’t really open up access to the database; what they did was allow those with MLA membership to access it. In other words, if you have a membership, or are a member of an institution which has one, you can see the list, and if not, well, no job database for you. Just to be clear, this isn’t about posting jobs; this is merely about seeing them. In other words, the database is paid access.
Last year the MLA claimed that the job list would be open access, as in available to anyone, so to some the fact that accessing the database still required a subscription seemed problematic. Later Rosemary Feal, the executive director, explained that the database was still restricted access, but that anyone could access the .pdf copies of the list, published twice this semester (once in October, once in December). So, in short, the database is closed, but a published version of it is available to the public. The MLA website says, "printable PDF files of the JIL are available free of charge on the website." What that section doesn’t mention is that the PDFs are not updated as frequently, nor available on the same date, as the online list. And given the current competitive job market, the ability to access this list in a timely manner is crucial.
What ensued was a heated discussion on Twitter and Facebook about this policy, between myself (and many others, who I will let add themselves to this list if they want) and the MLA. To us it seemed an unfair policy (why lock down the list?) and a disingenuous claim to state that the list was open when in fact only a limited version is being made available to the public. This, in my mind, is known as "open-washing."
So, I am going to call bullshit on this one. This list is not open, and the MLA’s policy of maintaining restricted access to the database is unethical (and they know it). Why? Let me explain.
All About the Benjamins
The MLA claims that access to the database is a service it provides its members. If you join, or your host institution joins, you are given access. In the analog days of job hunting this made some sense: there was a cost to distributing the list, to printing and mailing it and getting it into the hands of anyone who wanted it, job seekers, curious academics, members of the press, faculty members, etc. But now, given the affordances of the digital network, the cost of distribution is trivial, and while not zero, it is pretty darn close (bandwidth does cost).
So why does the MLA still restrict access? The answer is pretty simple: money. One has to pay both to get a job listed and to access the list. Why? The rationale seems pretty clear to me. If the MLA opened up the list, i.e. didn't require one to be a member to access it, the number of member institutions would go down. Presumably there are a lot of institutions that would cancel their membership if the list were free. The subscription rates here are confusing and a bit complicated, as the relationships to the ADE and the ADFL make things even messier, but on the about the JIL page you can read about the policies. Looking at the recent 990, though, it is pretty clear that membership and subscription to the JIL are a large chunk of the MLA's income. (It's not clear to me how much of the income is subscription and how much membership. I am not a forensic accountant and I didn't stay at a Holiday Inn Express last night, but the numbers are large; see 4a-4c.)
Clearly the motivation here is for the MLA to make money by ensuring that both those who use the list to post a job and the graduate programs that use the list to help students get placed pay for access. But it is also pretty clear to me that the MLA makes more money off the list than it costs to run it. In other words, they are using the list to leverage institutional buy-in. The list serves as a motivation for an institution to join the MLA and support its efforts.
Before we go any further, let me say again that I support the MLA. I think it has a worthy mission and that it does a lot of good work as an advocacy group on behalf of professors (see the role it played in resisting foreign language cutbacks). And indeed, in the job market the MLA probably plays a good regulatory role, setting norms of behavior that are in the best interests of candidates (for example, institutions have to agree to a certain set of standards and behaviors to use the list). So the MLA clearly provides a service to the community. But that doesn't mean that the ability to access the list is a service that job seekers ought to pay for.
The MLA wants to claim that the list is a service provided to members: join the MLA and this is one of the benefits. But that's not at all what is going on here. Instead, the MLA has set itself up as the primary knowledge broker in the trafficking of information about jobs. The MLA list is the place where job listers post because it is the place where job seekers (at least in MLA fields) go to find jobs. And because the MLA has this important informational resource, it is able to use it to leverage institutional support (AKA make graduate institutions that want to help students find jobs pay for access and/or membership). This isn't a service; this is holding information hostage.
To see how this is the case, imagine the MLA job list went poof tomorrow: completely disappeared, the site wiped off the internet, all the print editions burned, the JIL no more. What long-term effect would this have on job seekers? None. Why? Because the job listings would move elsewhere: The Chronicle, Inside Higher Ed, HigherEdJobs, heck, even Monster.com. All places where job listers have to pay but job seekers have free access. In other words, to a large degree job seekers would be better off if the list just disappeared. Consider also how other professional organizations, such as the American Historical Association and the American Mathematical Society, provide free access to their lists for job seekers, charging only to list a job.
Why then is the MLA locking down this knowledge? Indeed, the MLA has recently made moves toward open access, giving authors open access rights over articles published in MLA journals. So it is odd that the MLA would choose to lock down information it didn't even produce: the job ads are all authored by institutions; the MLA merely curates the database. What is particularly vexing about this situation is that as the cost of distribution has gone down, the price of access has gone up. Clearly the MLA makes more money than it spends on this list. It is using the list to fund operations, using its position and control over the list to force other institutions to pay the rates it dictates both to list jobs (fine with me) and to access them (a far more spurious endeavor).
I get it: the MLA is invested in preserving the current job ecosystem, where it serves as the broker and collects on both sides; being a knowledge cartel is a good racket. But that doesn't make it ethical or justifiable.
So when you raise this, what is the response of the MLA? Well, they will claim that everyone who needs access has access. But as many have pointed out, there are hosts of contingent faculty, and faculty who have been away from the market or aren't fresh out of grad school, who might not have such easy access. It isn't precisely clear from the MLA's site that one is supposed to be able to access the job list via one's graduate institution in perpetuity. Or the MLA will say that it will get access to anyone who needs it. But this also isn't clear. Why not display a button, icon, or text that says "don't have access? click here," letting job seekers without access fill out a form and gain access, sponsored by the MLA? The MLA is counting on the idea that graduate institutions will provide access to everyone, which is clearly not happening. For years there has been a sort of informal trading among individuals, where those with access share with those who don't.
But more to the point, it is ridiculous to require paid registration on the one hand and then say everyone has access on the other. Paid access is by definition a gatekeeping function meant to restrict access. The logical fallacy here is large enough to drive a truck full of rhetoric professors through. Either everyone has access, or the MLA gatekeeps the list. Right now they are acting as gatekeepers, even if they want to claim that everyone has access. (P.S. Doesn't the fact that the MLA describes the PDF list published in October and December as "open" serve as an admission that it recognizes that the online database is not open?)
I realize the MLA's business model is based (in part) on profiting from this list, but revenue is not an excuse to act unethically. Even more, the MLA is missing an opportunity here. A list which is open to all job seekers is far more valuable in the long run than one that is closed. If you are a job lister, you want your job published to the widest possible audience. An open list with a larger viewership is more valuable to those listing positions, and as they realize this they are likely to move to posting jobs in places that aren't locked down. If you were a department and only had the financial means to post a job in one place, would it be a list with limited viewership or one with open access? And increasingly job openings are becoming open by proxy, as places like the AcademicJobWiki or social media are used to share jobs; nearly all jobs are listed on the home institution's website. So what the MLA does is curate these jobs, and if someone else can do this better, for cheaper, and provide access to more job seekers, the MLA is rapidly going to be obsolesced. And if the MLA ceases to be the place where job seekers go, and hence where job listers go, it will lose leverage over recommended hiring practices, etc. (an area where it is providing a service).
A Better Way Forward
From a strategic point of view it makes more sense to play the long game here and open up the list, serving as the aggregator for all English jobs. Charge job listers, not job seekers. Open up the database; heck, even make an API so others can use the data. Imagine what could be done, what job seekers could do: create a mash-up of the data with Google Maps so you could see ads by geographic location, or write a program that would allow you to look for jobs as an academic couple (jobs in nearby geographic areas), something Inside Higher Ed already does.
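To make the mash-up idea concrete, here is a minimal sketch in Python of the kind of tool an open API would enable. Everything in it is hypothetical: the JIL has no public API, so the field names and sample listings below are invented for illustration. It implements the academic-couple search by flagging pairs of listed jobs within commuting distance of each other:

```python
import math

# Hypothetical records: an open JIL API might return JSON shaped like this.
LISTINGS = [
    {"school": "University A", "title": "Asst. Prof., Rhetoric", "lat": 40.7128, "lon": -74.0060},
    {"school": "College B", "title": "Asst. Prof., Victorian Lit", "lat": 40.7357, "lon": -74.1724},
    {"school": "University C", "title": "Lecturer, Composition", "lat": 34.0522, "lon": -118.2437},
]

def distance_km(a, b):
    """Great-circle distance between two listings (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (a["lat"], a["lon"], b["lat"], b["lon"]))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def couple_friendly_pairs(listings, max_km=80):
    """All pairs of jobs close enough together for a two-body job search."""
    return [(x["school"], y["school"])
            for i, x in enumerate(listings)
            for y in listings[i + 1:]
            if distance_km(x, y) <= max_km]

print(couple_friendly_pairs(LISTINGS))  # -> [('University A', 'College B')]
```

The same records, with their coordinates, could just as easily be dropped onto a Google Map; the point is that none of this is hard once the data is open.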
Let me reiterate: a business model is not a justification for closing access, not merely because this is an outmoded model likely to lose purchase in the coming years, but because it is a model that often hurts the most marginalized members of our community. Academic knowledge exchange is changing, and with it should change our professional practices and advocacy institutions. No one understood this better than the MLA when it hired Kathleen Fitzpatrick to be the director of scholarly communication. Kathleen is most famous for a call to leverage the digital to alter our institutional practices lest we face obsolescence, which I think many would agree is what is going on here with the job list.
(Copied from a Facebook Discussion on Scott Eric Kaufman’s Page):
Note: Subsequent to this discussion, a group of English academics launched mlajobleaks.com, which makes the job list available to anyone. Let me say that although I have been accused of being the mastermind behind this project, I am not. While I might know the parties involved, and might even have provided "material support," I shouldn't be given credit (or blame) for this. However, I encourage all the parties involved, and hope that the list continues to be publicly available.
David Parry | Blog | <span class='date ' tip=''><i class='icon-time'></i> Jun 09, 2016 06:15am</span>
(I probably don’t have time to fully develop this point, but here goes . . .)
It should come as no surprise that I am an advocate of open access; that was, after all, the focus of my talk at Computers and Writing, and it is a frequent subject on this blog. Indeed, many have accused me of being militant about open access, a moniker I'll gladly wear. Not to get too far off topic here, but I think this is one of the defining issues of our time: the extent to which we recognize knowledge rights as human rights and resist the current trend toward commodifying and restricting knowledge access.
Which brings me to the article in The Chronicle covering my previous post and MLAJobLeaks, which has kicked off more discussion about the Job Information List (JIL), especially on Twitter.
Let me be clear, once again: I like the MLA, and as many have pointed out, over the last five years the MLA has made many positive steps toward recognizing the changes the digital both affords and demands of institutions like it. But let's consider the long game here: ten years from now, what is the future of the JIL? I am going to predict one of two things. Either the JIL is open (as in really open, not pseudo-open with the occasional PDF, but a database that is simply free for anyone to look at) or the JIL disappears. Why disappear? Because other options that are free to access and present the information in a more usable format are going to replace it. I can even imagine a whole new multidisciplinary job list, separate from The Chronicle, Inside Higher Ed, or HigherEdJobs, popping up. There is probably even a way something like Interfolio could host one of these that would put all the others out of business . . .
So, knowing this, the real question is what is the future of the list; that is, how does the MLA avoid being obsolesced in this transition? And here's the thing: I think the people at the MLA know this, know that the transition is coming and that they have to do something about it. But they are caught, as the article points out, in a rather difficult position, where the organization's revenue stream is tied to this obsolete business model. They have to transition, but doing so is difficult. The question then becomes how to do this. And again, Rosemary knows this when on Twitter she pleads for patience.
But you see, this isn't just about the future; it is also about the present. It's easy perhaps for me to have patience: I have a tenure-track job, and I am not on the market. But the people who are currently on the market don't have that luxury. Saying change is coming is nice, but that doesn't help the people who need it now. So the question becomes twofold: 1. How to transition effectively. 2. How to do so in a way that implements stopgap measures in the meantime. For what it is worth, and so I don't seem like someone who is just needlessly critiquing, here are some suggestions (recognizing that I don't have all the information about how the JIL operates internally at the MLA and who gets to make those decisions).
1. Start by being honest. Recognize that the list is currently not open and that there are people who do not have access. Claiming it is open is just open washing. Explain that the reason it is closed is a business decision and that the MLA is committed to correcting this and making it open in the future.
2. Provide a timeline. Be concrete; give people a sense of how long until the list will actually be open. One year? Two years? Saying that it will be open in the future doesn't help.
3. Provide a stopgap measure for those who currently don't have access. On the login screen and on the about the JIL page, prominently display the above information, along with the ability to get access if you currently do not. For example, let users without access fill out a form with name, email, and institutional affiliation to request access. Have those requests moderated and approved, giving everyone access to the database within 24 hours of the request. This doubles as providing the MLA with information about the people who currently don't have access. Provide the above info on this page, and this page. And doing a press release about this would help as well.
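To show how little machinery the suggested stopgap actually requires, here is a minimal, hypothetical model of that request queue in Python. None of the names reflect any real MLA system; the 24-hour target is simply the one suggested above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AccessRequest:
    """One submission from the 'don't have access? click here' form."""
    name: str
    email: str
    affiliation: str
    submitted: datetime = field(default_factory=datetime.utcnow)
    approved: bool = False

class RequestQueue:
    """Holds pending requests so a moderator can approve them."""
    def __init__(self):
        self.requests = []

    def submit(self, name, email, affiliation):
        req = AccessRequest(name, email, affiliation)
        self.requests.append(req)
        return req

    def overdue(self, now=None):
        """Requests still unapproved past the 24-hour turnaround target."""
        now = now or datetime.utcnow()
        return [r for r in self.requests
                if not r.approved and now - r.submitted > timedelta(hours=24)]

queue = RequestQueue()
req = queue.submit("A. Scholar", "a@example.edu", "Independent")
req.approved = True          # a moderator grants access
print(queue.overdue())       # -> []
```

The `overdue` report is the accountability piece: it makes the 24-hour promise checkable rather than aspirational.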
David Parry | Blog | <span class='date ' tip=''><i class='icon-time'></i> Jun 09, 2016 06:14am</span>
In conjunction with the keynote that I gave at the Computers and Writing Conference this year, I wrote an article for Enculturation: A Journal of Rhetoric, Writing, and Culture. For those who are interested, this piece articulates, in a slightly different manner, why I think that Open Access issues are fundamental to the work we do in the academy. In it I argue that the conversation about Open Access is just a small part of the larger discourse surrounding knowledge rights and the current effort by a few parties to restrict the flow of culture and knowledge.
David Parry | Blog | <span class='date ' tip=''><i class='icon-time'></i> Jun 09, 2016 06:14am</span>