Blogs
(I haven’t figured out how to embed images in a Google Plus post so they show up where I want, rather than as a gang of photos at the bottom. I also haven’t posted here in a while, so I thought I’d ignore the figuring and sneak in some posting.)
Here’s an easy way to save items from your Google Plus stream to Evernote.
Step 1: Get your Evernote email address (the one Evernote assigned to you when you signed up).
Sign onto Evernote.
Click Settings.
At the bottom of the Settings page, you’ll see Emailing to Evernote.
That’s where you’ll find your Evernote email address.
Step 2: Create a new Google+ circle. (I named mine "Evernote." You go wild like that, too.)
Step 3: Click "add a new person." Enter your Evernote email address.
Enter your Evernote email address.
Step 4: Enter a name for this new "person."
Enter a name for the Evernote email address
Step 5: You’ll see the new person in the new circle. (You can add others, but I didn’t.) Be sure to click "create circle."
Create the new circle.
That’s it for setting up the circle. Here’s how you use it:
When you find an item in your Google+ stream that you’d like to send to Evernote, click the Share button, then select your Evernote circle. (I made Evernote the first in my list of circles, mostly so it’d show up first in the screen shot below.)
Sharing an item in your stream
Google+ reminds you that someone in that circle isn’t yet on Google+. They mean "your Evernote email isn’t," which is true. You can share the item with additional people or circles, but I’m trying to stay simple here, so I just click Share.
Confirmation (part one)
I don’t know if Google+ is being solicitous or just fretful, but when you do click Share, you’ll get a second reminder that someone you’re sharing with isn’t on Google+ and will have to settle for email.
Confirmation (part two), or, are you sure you're sure?
Within a minute of my having shared the item in Google+, Evernote had it in my default notebook.
How it looks in Evernote
The only quibble I have here: the item received by Evernote comes from me — I was sharing stuff in my stream with Evernote, right? And so, if it’s an item that someone else posted (one that was in my stream, but not originally from me), there’s no indication in Evernote of who originally shared the item.
If I click that "view or comment" link in the Evernote note, I will see the item as it originally appeared in my stream — with, in this example, a link to Jane Bozarth, who originally shared the item.
Back to the source
I’m grateful to Beth Kanter, who’s shared a number of useful Google+ tips, and to Vikki Baptiste, whose comment on one of those tips led me to search for the details of how to do this.
Dave Ferguson | Blog | Aug 19, 2015 05:03pm
About a year and a half ago, I decided to try losing weight by following the Weight Watchers program that my wife had enrolled in. After a few months, I began to view weight management as a kind of performance improvement project (see this post and this one).
(Here on my Whiteboard, I focus mainly on topics like workplace learning and performance improvement, areas I’ve worked in for decades. No one in his right mind would pay me for advice on cardiovascular health, weight-change dynamics, or the physiology of nutrition and exercise. I’m extrapolating from my experience to make a point about accomplishments at work, not telling people they should eat less or exercise more.)
I’m no longer such a big deal
Although I didn’t say so at the time, my ultimate goal was to lose 60 pounds, 50 of them in the first year. Some 20 months after I started, I’ve lost 43.
You could say "that’s great!" Or you could argue I’ve fallen short of my goal. I’ve felt especially frustrated by months-long stretches where I didn’t seem to lose any weight at all. This in spite of what I think of as the bank-account approach to weight: there are 3,500 calories in a pound, so reducing your daily intake by 500 calories should have you losing a pound a week, give or take.
The New York Times recently ran Why Even Resolute Dieters Often Fail, in which Jane E. Brody reported on a study by Dr. Kevin D. Hall and his associates. The study, which appeared in the August 27 issue of The Lancet, makes a number of striking points. (By the way, that link to The Lancet leads to a summary of the study. For the complete study, use the free registration option at the bottom of the summary.)
Among those points:
That 3,500-calorie model leads to "drastically overestimated expectations for weight loss." Overestimated, as in predicting "about 100% greater weight loss" than the model that Hall and his colleagues set forth.
Weight loss requires much more time than many people expect (and more time than many diet-plan promotions imply).
Although my 60-pound goal is reasonable for me, Hall’s study suggests I’ll see only "half of the [desired] weight change being achieved in about 1 year, and 95%…in about 3 years."
I’ve read Brody’s article several times, and gone over the Hall study in detail; they helped me understand my own situation. More to the point here, they offer me an opportunity to compare weight management with improving performance at work.
Training is like dieting: not a bad way to start
When I say "training," I’m usually thinking of a deliberate effort to close an existing, important gap between current skills and those required for a newcomer to achieve acceptable results in the workplace. I’ve worked on lots of projects where such training made sense for people like reservation agents, field salespeople, and health-claims adjustors.
What I think these projects have in common is that it was possible to help people gain new skills so they could produce acceptable performance in a relatively short time. They aren’t going to be master performers right away, but they’ll be good enough for now. And they’ll be more likely to improve in the future, because they’ll no longer be complete novices.
What such workers tend to have in common is that they have lots in common: they do similar work, they have similar job-relevant experience, they have similar skills, and they lack similar skills. Often they’re in a few physical locations (like, say, central offices or reservation centers), or the organization can assemble them for training (classrooms, workshops) or assemble training for them (online learning).
As for the skills they need to acquire, those are predominantly procedural: how to check availability, how to manage customer accounts, how to conduct intake interviews.
How is this like dieting? If you’re overweight (e.g., have a BMI over 25) or obese (over 30) and you’d rather not be, there are lots of approaches you can take at the outset. Noting your caloric intake and decreasing it, so that you’re not taking in as many calories as you expend, is one approach that may be good enough for starters. If you don’t have other serious health issues, and if a principal cause of your current weight is a caloric imbalance, then a deliberate reduction in overall calories (a diet) will likely produce results.
Don’t just take my word for it. "All reduced energy diets have a similar effect on body-fat loss in the short run," Hall’s study says. "The assumption that a ‘calorie is a calorie’ is a reasonable first estimation…over short time periods."
Even in that short term, you have choices that are more effective and choices that are less so. The real-world Mayo Clinic Diet (as opposed to the "miraculous," grapefruit-laden one), for example, will likely produce better results than the kind of "diet" that has you eating nothing but rutabaga and rockfish.
To me, that’s analogous to the difference between "any training is better than no training" and training based on task analysis, needs analysis, and effective ways to help people learn.
From apprentice to journeyman (Deterline was right)
Thus far it seems that Brody, Hall, and I are in agreement, which is pretty classy company for me. It doesn’t seem to matter much how you start on weight management. Many different paths will produce results that are good enough in the short term.
In the workplace, though, short-term thinking rarely pays off long term. Likewise with job-related skill: good enough for a novice, after a while, isn’t good enough. If you think of the newcomer to a job as an apprentice, you want him or her to eventually move to the journeyman level: more skilled, able to deal with a wider range of problems, and competent in skills that are not simply procedural.
That’s not easy. As Bill Deterline once observed, "Things take longer than they do." Part of the path from apprentice to journeyman is learning to recognize and deal with complexity. In the weight-management world, here’s some of the complexity revealed by Hall’s study:
When an overweight person begins consuming fewer calories than he expends, he loses weight, but the rate of loss slows as the ratio of fat to lean in his body changes. (Weight loss is not linear; steady progress is unlikely.)
The same increase in caloric intake will result in more weight gain for an overweight person than for someone not overweight, and for the overweight person, more of the gain will be body fat. (You risk regaining, and you’ll regain quickly.)
Here’s how Hall’s study suggests you think about goals for weight loss:
We propose an approximate rule of thumb for an average overweight adult: every change of energy intake of 100 kJ per day will lead to an eventual bodyweight change of about 1 kg (equivalently, 10 kcal per day per pound of weight change) with half of the weight change being achieved in about 1 year and 95% of the weight change in about 3 years.
How does that rule apply to my original goal? Let’s assume I was consuming just enough calories to maintain my starting weight. Yeah, let’s assume that. To lose 60 pounds would mean:
Reducing my intake by 600 calories a day (a kilocalorie is the scientific term for what dieters call a calorie), thus…
Losing 30 of those pounds in the first year, and in theory…
Losing 57 pounds (95% of the goal) by the end of the third year.
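Hall’s rule of thumb lends itself to quick arithmetic. Here’s a minimal Python sketch comparing the bank-account model against the rule from the study, for a 600-kcal/day reduction. The function names and structure are mine, not the study’s:

```python
def static_rule_loss(deficit_kcal_per_day, days):
    """Classic bank-account model: 3,500 kcal equals one pound, linear over time."""
    return deficit_kcal_per_day * days / 3500.0

def hall_rule(deficit_kcal_per_day):
    """Hall et al.'s rule of thumb: 10 kcal/day per pound of *eventual* change,
    with about half reached in year 1 and 95% by year 3."""
    eventual = deficit_kcal_per_day / 10.0
    return eventual, 0.5 * eventual, 0.95 * eventual

deficit = 600  # kcal/day cut, aiming at a 60-pound eventual loss

print(static_rule_loss(deficit, 365))       # about 62.6 pounds in the first year
eventual, year1, year3 = hall_rule(deficit)
print(eventual, year1, year3)               # 60 eventual, 30 in year 1, ~57 by year 3
```

The static rule predicts roughly 62.6 pounds gone in the first year, about double Hall’s 30, which is the "about 100% greater weight loss" overestimate the study describes.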
From Hall’s viewpoint, I’m on track: I’m more than halfway to my goal, and I’ve managed to maintain that loss. In a sense, I’m no longer a weight-management apprentice.
What happens after a good start
I said that training is like dieting. But I’ve implied (and I’m now stating outright) that most of the time neither one is sufficient for long-term results. "Diet" in the traditional sense is a short-term planned restriction on caloric intake in order to produce weight loss. "Training" in the traditional organizational sense tends to be a group-focused, short-term effort to provide people with mainly procedural skills that they currently lack, in order to produce acceptable results on the job.
Just in case it’s unclear, I keep harping on "acceptable results" because if training doesn’t relate to on-the-job accomplishment, I don’t quite get why the organization bothers. I keep harping on a lack of skill because if people already have the skill needed but the organization is "training" them anyway, mostly what people learn is that the organization isn’t all that bright.
The Brody article and the Hall study reinforce what I think of as a movement from losing weight to maintaining health. On the job front, it’s like the difference between a hotel employee’s using the hotel reservation system correctly and that same person successfully resolving a customer service problem.
Even entry-level positions involve some judgment, some decision-making, some degree of tacit knowledge. You can’t train for these things specifically; you need to develop models, offer examples, offer opportunities to practice and reflect.
Thus Hall’s 3-year timeframe is one tool that an individual can use to set his or her own expectations regarding the rate of weight loss and the likelihood of plateaus, along with similar research-based principles like these:
We can’t estimate a person’s "initial energy requirements" (daily caloric need) without an uncertainty of 5% or even greater. (Your reduced-calorie target is only an estimate.)
People are often inaccurate in describing or recording their food intake, either before or during a weight-loss program. (Your munchage may vary.)
As Brody points out in her New York Times article:
Studies of the more than 5,000 participants in the National Weight Control Registry have shown that those who lost a significant amount of weight and kept it off for many years relied primarily on two tactics: continuing physical activity and regular checks on body weight.
How about that? Behavioral change, the specifics of which vary, the results of which are higher levels of caloric expenditure. And a monitoring system to track data and assist in further analysis.
(I weigh myself at the same time every day that I’m home, and have done so for 20 months. Not only does the momentum of the practice itself carry me along, but I have a good sense for what the typical variation is. Of course, if I’ve gained weight, that’s just a fluctuation, but if I’ve lost weight, that’s progress. You go with the evaluation system that makes the most sense.)
I do think there’s a role for formal organizational learning (in my mind, a much better term than "training"), though it’s a narrow role, in the same way that diet-as-restriction has a narrow role in managing overall health. Both may in certain circumstances be good enough to start with, but both are likely to fall short over time.
In other words, I believe that letting new hires figure out the inventory-management system for themselves is probably a suboptimal approach. You’re deluding yourself, though, if you think you can procedurize your way to workplace mastery. If you’re trying to increase your organization’s effectiveness, you have to do better than telling people to eat more grapefruit.
CC-licensed images:
Balance-beam scale by wader.
Car-hire image by Send Chocolate (Tina Cruz).
Nighttime road by Axel Schwenke.
Lena H. Sun of The Washington Post, who often reports on health-related topics, has an article in today’s paper about the use in medical training of "standardized patients" — healthy people portraying patients. (Here’s how Johns Hopkins Medicine describes its standardized patient program.)
Developing the capabilities of doctors, nurses, and other practitioners is a clear example of complex learning. You have a wide range of skills. Some are primarily procedural: when you draw blood, do it like this; when you’re checking vital signs, do it like that. Follow this process for obtaining and recording data.
Most of what we think of as medical training, though, involves skill for situations where there’s no single correct approach to a given problem. So the standardized patient is an individual who’s portraying a particular type of patient; in other words, someone who’s acting as a realistic learning task.
Many [of the standardized patients] are actors, but actors don’t always make the best patients, clinical directors said. Improv is not allowed. People trained to portray a particular type of patient must work from the same facts and deliver responses in the same way to the students examining them.
"They can’t overact," said Kathy Schaivone, clinical instructor and director [of the Clinical Education and Evaluation Laboratory] at the University of Maryland at Baltimore. "If I can’t guarantee that all five will cry, the ones that I know that can [cry], I have to ask them not to."
(Here’s an overview of the standardized patient curriculum at U-Maryland Baltimore.)
One challenge for the standardized patients is to provide a structured debriefing: "Did the student palpate the sinuses? Listen to the heart in all four places? Wash hands before and after touching the patient?"
In this setting, I see two interconnected sets of skills:
Those needed by the medical practitioners to relate to patients, interact with them, and arrive at a reasonable diagnosis based on limited information.
Those needed by the standardized patients in order to believably and consistently portray someone with a particular condition.
Behind both of these, of course, is an intensive effort to design, develop, and implement the training. Beyond the somewhat obvious (what conditions are both useful to have portrayed and suited to the standardized patient approach?), there’s the multilevel skill required of the patients: how do I portray the condition? What do I share readily? What do I tend to withhold? What am I incorrect about?
In addition, the patients need to debrief the students, both via checklists and via face-to-face feedback. Program directors like Schaivone, meanwhile, need to monitor the performances of both the patients and the students.
To illustrate the complexity of behavior, the online version of Sun’s article has a link to this May 2011 article on how doctors struggle to show compassion, by Manoj Jain, an infectious disease specialist and professor at Emory University.
‘Standardized medicine’ image adapted from the CC-licensed original by Ben Weston (Tek F).
In the few days since my last post, I’ve spent time thinking about how people get better at producing results on the job. That’s a bit of a paraphrase, but "how people learn" is too broad for what I usually end up working on. My projects vary widely, but what they have in common is the client’s desire to improve what people accomplish.
I believe that less and less of that improvement will come from the efforts of traditional, corporate training and development. (Note that calling yourself "Organizational Learning" isn’t the same thing as having people in your organization learn.) I do think there’s a role for planned, structured efforts to help people acquire and improve important skills — but it’s like the supporting role of the earl of Exeter in this clip, rather than the leading one of his nephew, King Henry (whom the king of France refers to as "our brother England").
Some of the skills that learning professionals have specialized in — analysis, design, structuring, and so forth — are moving out of their control, because other people need to apply those skills and can’t or won’t wait. This is a topic I’ll pick up again in 2012. I’ve been considering what I know that’s effective and thinking about how to enable other people to be effective with that knowledge. Like, for example, how to build job aids.
One way to look at a job aid:
It’s information external to you (rather than inside your head)
…that you apply on the job (rather than, say, reviewing beforehand)
…to achieve acceptable results
…while reducing the need to memorize.
So in part this last post of 2011 looks ahead to what I’ll be working on in 2012. And in part it’s a reason (as if I needed one) to (re)post my explanation of Robert Burns’s most famous song, one you’re likely to hear this weekend. Auld lang syne is a Scots phrase. Literally, it’s "old long since"; it means "the days that are past," and it has a sense of "the things that we shared."
Even if you decide not to bother with my chart, you ought to take the time to listen to Eddi Reader’s singing. The video is from the opening of the new Scottish Parliament building in 2004. In the first half, she solos with a traditional melody. In the second half, the attendees join with a version you likely know better.
What Burns wrote
The gist
Should auld acquaintance be forgot,
And never brought to mind?
Should auld acquaintance be forgot,
And auld lang syne?
These are rhetorical questions:
- Should we forget old friends and never think about them?
- Should we forget old friends along with everything that’s past?
For auld lang syne, my dear,
For auld lang syne.
We’ll tak a cup o’ kindness yet,
For auld lang syne.
Not at all; in fact, we’re going to have a drink together for the times gone by.
We twa hae run about the braes,
And pou’d the gowans fine;
But we’ve wander’d mony a weary fit,
Sin’ auld lang syne.
We two have run along the hillsides
And picked the lovely daisies together-
But we’ve wandered many a weary foot
since the times gone by.
We twa hae paidl’d in the burn
Frae morning sun till dine;
Now seas between us braid hae roar’d
Sin’ auld lang syne.
We two have paddled in the stream
From dawn till dusk
But broad seas have roared between us
Since those times gone by.
And surely ye’ll be your pint-stowp!
And surely I’ll be mine!
And we’ll tak a cup o’ kindness yet,
For auld lang syne.
(I know) you’re good for your drinks ("be your pint-stowp" means "pay for your tankard"), and you know I’m good for mine. We’ve still got that drink to share for the times gone by.
And there’s a hand, my trusty fiere
And gie’s a hand o’ thine
And we’ll tak a right gude-willie waught
For auld lang syne.
So, here’s my hand, my trusty friend
And give us (= give me) yours
We’ll take a good, hearty drink
For all the times gone by.
Dave’s Whiteboard marked its fifth anniversary last month. (No, I didn’t notice, either.) You might not think it, given my recent output, but my Whiteboard means a lot to me — so much so that whenever I think about changing the theme (the package of files that controls the appearance) I end up considering one that looks much like what I’m currently using.
Sticking with what I’ve had has more and more often meant running into technical problems. My current theme is out of date in several ways; for example, it’s not widget-aware, which means I can’t take advantage of simple ways to customize and control the appearance.
I’ve occasionally written several posts on a single topic (a series of posts). At the time I used a WordPress addon (a plugin) that automatically added previous/next links so that a reader could work through a series without worrying about date or about intervening but unrelated posts. That same plugin created a table of contents as well, so you could tell where you were in the series.
That plugin stopped working a few months back; I have no idea why. The effort to manually input the links (to hard-wire them, so to speak) was more than I was ready to expend. Still, I plan to write a series or two in the coming months, and I wanted a low-maintenance way to present all my series.
So this past weekend I started experimenting with the Organize Series plugin. I tested to see if it could link the three posts in my series about the book Improving Performance, by Rummler and Brache.
And it could. What’s more, with a $15 add-on, I’m able to use a little bit of code and automatically generate a list of posts in a series, like this:
Improving Performance (the book)
- Rummler and Brache: Improving Performance
- Three levels of performance
- Process is a verb, output is a noun
- Dirt in the performance engine
I had to do some tinkering, and I had to purchase a $15 add-on for the plugin, but I’m content so far: I’ve accomplished my short-term goal of making each of my series work like a series again, without a lot of hand wiring.
That list of posts in the Improving Performance series, for instance: to make it appear here after installing the Organize Series plugin and the add-on, I inserted the following code into my post:
[ post_list_box series=65 ]
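That bracketed tag is a WordPress shortcode: the blog engine scans the post text for the tag and swaps in generated markup. As a rough illustration of the general mechanism (not the Organize Series internals; the series data and HTML below are invented), here’s a Python sketch of how such an expansion might work:

```python
import re

# Hypothetical series data; in WordPress this would come from the database.
SERIES = {
    65: ["Rummler and Brache: Improving Performance",
         "Three levels of performance",
         "Process is a verb, output is a noun",
         "Dirt in the performance engine"],
}

def expand_shortcodes(post_html):
    """Replace each [post_list_box series=N] tag with a list of post titles."""
    def replace(match):
        series_id = int(match.group(1))
        items = "".join(f"<li>{title}</li>" for title in SERIES.get(series_id, []))
        return f"<ul class='series-post-list'>{items}</ul>"
    return re.sub(r"\[\s*post_list_box\s+series=(\d+)\s*\]", replace, post_html)

print(expand_shortcodes("Intro. [ post_list_box series=65 ] Outro."))
```

The point of the shortcode approach is exactly the low maintenance I was after: the post contains only the tag, and the plugin regenerates the list whenever the series changes.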
Enough WordPress mumbo-jumbo. I’m going to revisit this from the perspective of learning on the job. My hunch is that there’s a kind of tradeoff a person is willing to make when he has a problem to solve (or an opportunity to seize). What goes into figuring the worth: the amount of effort expended, and the value of the results… as seen by the person with the problem or opportunity.
CC-licensed photo by Craig Bennett / theclyde.
When I read about the Organize Series plugin for WordPress (a focus of Monday’s post), I thought, "This could do it."
No I didn’t. I don’t know about you, but I rarely think to myself in complete sentences. Phrasing like this is how we capsulize a more complex experience. What I believe was going on at the time was something like this: I had a situation I wanted to change (the way I used to manage a series of posts here on my blog no longer worked). And the Organize Series plugin at first glance looked like it could accomplish at least two things:
Provide automatic navigation between posts in a series (so I wouldn’t have to hard-wire the links).
Display a list of all the posts in a given series (for me to use as a summary or as a table of contents for the series).
If I’d thought about it longer, I might have articulated another goal: have some way to list all the different series I have. But I’m not usually that strategic. Still, what I came up with (provide navigation, display a list) acted as my critical-to-quality elements. CTQs were widely used at GE when I worked there; I use that acronym partly tongue-in-cheek and partly to highlight informal criteria.
So, I put Organize Series to work, and within 10 minutes I had automatic next/previous navigation for posts in a series, along with an indication that they were part of a series:
(You can click the image to see the entire post.)
When I was still considering whether to use the plugin, I said to my wife, "Wouldn’t it be great to know how to write a plugin?" On reflection, I realize this statement was another capsulization, or rather a series of them, nested inside each other. "Know how to write a plugin" really means:
"Know how to write a plugin" really means "write a plugin that works…."
Which in turn means "write one that produces results…"
Which means "write one that people use to accomplish things that matter to them."
To me, this is an important distinction for workplace learning: You can learn on your own for your personal satisfaction, and if you’re satisfied, then that’s a sufficient result. In the workplace, though, you’re part of a larger group (even if that group is you and one individual client), and so the result has to matter within that context.
What’s this got to do with my plugin tinkering?
Think of it as my own workplace learning. At this point, I was still some distance from my (loosely articulated) end state. I hadn’t moved much toward my other CTQ of displaying a list of all the posts in a series. In fact, I didn’t yet grasp all the options in the plugin, let alone know how to make them work in a way useful to me.
About 5% of the info from the plugin's page of options
But… in my first 15 minutes with the plugin, I’d achieved a result that I found valuable. That left me more willing to experiment, which, put another way, says I was somewhat more willing to spend time trying to achieve the next valuable result.
To me, this is a core principle for any type of workplace learning: formal or informal, face-to-face or virtual. I need to be able to accomplish something that looks to me like real work, to produce something that I see as having on-the-job value. And I need to do that sooner rather than later, which is why twenty minutes on introductions, half an hour on expectations for this workshop, and twenty minutes on learning objectives will invariably drive me to teeth-clenching frustration. Or to eating more of those lowest-bid-hotel pastries.
One of the unexpected outcomes of achieving an initial on-the-job goal is that you end up better able to visualize other goals. In a sense, learning leads to new problems (or opportunities) because you’re better at grasping the current situation and at visualizing different ones.
In the course of my experimenting with the Organize Series plugin, I did find at least one way to display a list of all the posts in a series. I can make a box like this appear alongside the title for each post:
The posts in my most complex series
You can click that image if you’d like to see the first post in the series, though I’ve turned this "series post list box" feature off for now, until I learn how to control the way it displays. Having managed to produce it, though, I’ve picked up several more goals for myself. I was about to write "learning goals," but I want to stress that they’re all tied to accomplishment.
I want to learn how to use code that’s part of the plugin to, for example, display a list of posts like the last example where and when I want it.
I want to find out how to modify the plugin’s template (the tool it uses to display the full text of all the posts in a series).
I may even want to learn how to modify the PHP or CSS code to make things happen.
That last is quite a goal for someone who doesn’t really know how to program. But my various experiments to date, and especially the things I see as successes, have taught me that I can learn to successfully modify small bits of PHP code and achieve relatively high-value results.
So I’m accomplishing what looks like real work to me.
My grade school was St. Brigid’s, in northwest Detroit. The parish has been closed for 22 years, and I suppose the school closed before that. I remember getting half a day off school for Father Brennan’s feast day. I remember teachers like Sister Patrick Elizabeth and Sister Mary Eamon (Eamon, as in de Valera; the school had lots of green on St. Patrick’s Day).
More than anything, I remember my sixth-grade English teacher, Mr. Strunk. He was only the second teacher I’d had at St. Brigid’s who wasn’t a nun, and the only one who was male.
In hindsight, I suppose I didn’t have a mental model for what a male teacher would be like. I was disconcerted at first by how different he seemed. I need to say that I had some very good teachers: I don’t recall any of that whacking-with-rulers stuff that people seem to assume was mandatory in pre-Vatican II Catholic schools.
But Mr. Strunk was really different. He said things that were funny, wry, unexpected. He read to us from Mad Magazine-and may have been planting a crop of critical thinking with the seed-starter of parody. He went far beyond the stuffy borders of our textbook.
Early in the school year, when he’d said something funny, I responded with a sarcastic laugh. (I suppose it was my ten-year-old’s critique: teachers weren’t supposed to be cracking wise.) He said, not harshly, "If you don’t think it’s funny, don’t laugh."
That was a door he opened just for me, but he spent a lot of time opening doors like it: "Think for yourself. You can do it."
He’d open them by assigning sixth graders a 1,500 word composition. Topic: The Dime. That was it; a two-word topic and a length. What can you do with that?
Another assignment: a 48-line poem. This time, he assigned the title: "The Last Voyage of The Albatross."
I don’t recall anything I wrote, but I have a vivid sense of enjoying the writing. I have an even more vivid sense of what he wrote on my paper, because it leapt into my memory and has never left:
Your poetry improves, my friend,
with each brand new endeavor.
I wish that I had words to lend
to serve you as a level.
But while such things as kings and men
on your mind’s sea do toss,
don’t let this be the last voyage
of your young Albatross.
School was never the same, and a few teachers after him suffered by comparison. I lost contact with him after going out of state for most of high school. In pre-Facebook days, it was hard to track down someone out of state; in post-Facebook days, it can still be hard to connect with someone who was over 25 when John Kennedy was assassinated.
Through a friend of my younger brother’s, I learned last year that Mr. Strunk was still in the Detroit area; he spent 40 years teaching and coaching. The friend sent me an address, but warned me that his health was poor. I wrote a letter that week; I’d sealed it and stamped it, then realized he might not be up to a written reply. I reprinted the letter and included a phone number, on the outside chance that he might remember me and might be up to calling.
No such luck, but that was all right. The important thing for me was to say to him directly, more personally, the kinds of things I’ve talked about here.
I have not seen Mr. Strunk since, I suppose, 1963. Many of my classmates will remember one of his weekend gigs at the parish’s activities building: hosting a hootenanny (and that’s a word well on its way to joining "floppy disk" and "antimacassar"). One of his standards was The MTA Song, about a hapless Boston commuter who lacked the "exit fare" and so couldn’t pay to get off the train.
And did he ever return?
No, he never returned
And his fate is still unlearned.
He may ride forever
‘Neath the streets of Boston:
He’s the man who never returned.
For me, Mr. Strunk was the man who always returned. I decided to become a teacher in part because of his example. Even after leaving the education field, I would recall his intelligent encouragement, his genuine interest in his students, his respect for their intelligence that included challenging them.
I learned only today that Mr. Strunk died last month. One woman wrote in the funeral home’s online guestbook, "My all time favorite teacher and I will never forget how honored I felt when he told me to call him Frank."
It’d be hard to top that. I am grateful to be able to say "Mr. Strunk" and still feel his presence. I’ve read comments from people who were students in his final years of teaching, and from classmates of mine, we who were the first class he taught, more than 50 years ago. There are teachers I will always cherish: Brother Leo and Brother André, Father McKendrick and Dr. MacDonald, Professor Bauder. But there was only one Frank Strunk.
Dave Ferguson | Blog | Aug 19, 2015 05:03pm
Scotland’s most famous poet wasn’t much of a success by the age of 26. He’d farmed, but not successfully, though he had more success sowing certain kinds of oats. Out of prospects, he’d accepted a job as a bookkeeper on a plantation in Jamaica… but didn’t have the money for the voyage.
His friend Gavin Hamilton, in whose memory I’ll have a little something this evening, suggested that Burns publish his poems "as a likely way of getting a little money to provide him… in necessaries for Jamaica."
Poems, Chiefly in the Scottish Dialect appeared in July of 1786. By September there was interest in a second edition. Within six months he was a celebrated artist. Jamaica was forgotten, until yet another of his loves, Agnes McLehose (known as Nancy to her friends), chose to rejoin her estranged husband… in Jamaica.
In a final letter before she left Scotland, Burns sent her the poem known as Ae Fond Kiss. It’s his birthday today; not a bad way to celebrate.
Ae fond kiss, and then we sever;
Ae fareweel, alas, for ever!
Deep in heart-wrung tears I’ll pledge thee,
Warring sighs and groans I’ll wage thee!
Who shall say that Fortune grieves him
While the star of hope she leaves him?
Me, nae cheerfu’ twinkle lights me,
Dark despair around benights me.
I’ll ne’er blame my partial fancy;
Naething could resist my Nancy;
But to see her was to love her,
Love but her, and love for ever.
Had we never loved sae kindly,
Had we never loved sae blindly,
Never met—or never parted
We had ne’er been broken-hearted.
Fare thee weel, thou first and fairest!
Fare thee weel, thou best and dearest!
Thine be ilka joy and treasure,
Peace, enjoyment, love, and pleasure!
Ae fond kiss, and then we sever!
Ae fareweel, alas, for ever!
Deep in heart-wrung tears I’ll pledge thee,
Warring sighs and groans I’ll wage thee!
Dave Ferguson | Blog | Aug 19, 2015 05:03pm
I have mixed feelings about the word "curation." On the one hand, I acknowledge its spirit: what Clay Shirky means when he says, "Curation comes up when people realize it isn’t just about information seeking; it’s also about synchronizing a community."
Or what I think he means, because, let’s face it, there’s a certain lack of specificity to "Hey, Dad, watch me while I synchronize the community."
I don’t think of what I do as curation. I think of it as putting stuff aside because I think it might have value for me. In the olden days, when "bookmark" meant something you slipped between the pages of a book, those things tended to go into file folders and onto bookshelves. Now, when content is (mainly) digital and storage is (virtually) free, they go into files.
To be honest, they tend to stay there, too. That isn’t the direction to take for things you want to learn, or learn from. So, once again, I’m profiting from the example of Harold Jarche, who for some time has made a habit of posting Friday Finds: weekly compilations of insights and observations that he’s captured on Twitter.
Via Kristina Halvorson (@halvorson), a link to Corey Vilhauer’s blog post, Building Confidence: The Hidden Content Deliverable. The ostensible topic is content strategy (which is what both Halvorson and Vilhauer really do), but anyone working in learning or workplace performance could read the post in that particular light as well. When we’re young and working as advisors, he says,
…We look down our nose. We assume our clients are dumb. The faster this goes away, the faster we can start doing the real work: understanding and embracing the needs of our clients and organizations.
From Yammer: The Blog, a post by Maria Ogneva, This is Not Your Parents’ Software Training. Nothing earthshaking, just a clear summary of alternatives to a bunch of same-time people in a bunch of same-time seats being told when and what to click.
From Vicki Davis (@coolcatteacher), a link to Brett McKay’s post, 4 Sites for Free Vintage Photos.
From Sweden, a 45-minute presentation at the Technical Communication UK Conference 2011 by Magnus Ohlsson and Jan Fredlund of IKEA’s communications group. The topic is how IKEA meets the challenge of 400 new sets of assembly instructions per year, plus revisions. The presentation comes via Mediasite, and the interface allows you to click through the slides; the audio will jump automatically to stay in sync. The first 18 minutes (slides 1-13) are background about the IKEA approach and the work of the group; starting at slide 14, there’s a more detailed look at what goes into the ubiquitous guides.
Via Pascal Venier, Graham Allcott’s new productivity rules of the road. Allcott’s business is helping people and organizations become more productive (warning: you’ll find Getting Things Done stuff). Among the thoughts that struck me, in part because you don’t often hear the relentlessly busy say things like these:
Starting well: beginning the day with meditation, exercise, a hearty breakfast, and "consuming limited information of my own choosing."
Going dark: from 9 till 1, Allcott shuts his internet connection off.
Making himself take lunch, and not work through it.
Via @Evernote, 10 useful tips from Brandie Kajino, their "organization ambassador."
…That’s the first installment of my shared keepers. You can think of them as having been curated if you want. Posting them here is, for me, a reworking and reprocessing of things. (I tossed a few others overboard; not everything labeled "keeper" merits being kept.)
Dave Ferguson | Blog | Aug 19, 2015 05:03pm
I’m not a fan of catchy for the sake of catchy, which probably explains why "celebrity" is not a word that appeals to me. I am a fan of titles, invitations, or openings that are succinct, intriguing, and mnemonic.
One example comes in the first paragraph of Unhappy Meals, Michael Pollan’s January 2007 essay in The New York Times Magazine:
Eat food. Not too much. Mostly plants.
Definitely succinct. To me, intriguing? Well, of course you should eat food. (Pollan advocates avoiding processed and manufactured food. He points out that produce doesn’t usually come with a label shouting "healthy!") As for mnemonic (in the sense of assisting memory), his three phrases epitomize the three main arguments in his essay.
I’ve written about weight management (here and here and here) and tried to explain effective, evidence-based approaches as a form of performance management. Perhaps that’s made me all the more receptive to an item in Obesity Panacea. Part of the PLoS (Public Library of Science) blog network, OP examines "the science (or lack thereof) behind popular weight loss products," as well as discussing other items related to weight.
The item? Can you limit your sitting and sleeping to just 23.5 hours a day?
Peter Janiszewski, who writes the blog along with Travis Saunders, highlights a video by Dr. Mike Evans of the Health Design Lab at the University of Toronto. Evans effectively poses his question in a succinct, intriguing way, and then offers a summary of evidence to support the treatment he recommends.
I find myself wondering how much practical information I could share like this, together with evidence, in less than 10 minutes. (Personally, I’d leave out the sketching on a whiteboard; the images are engaging, but for me the sped-up drawing lost its charm quickly. That’s nitpicking, though.) In terms of mnemonic effect, the title and the recommendation definitely stay with me.
Dave Ferguson | Blog | Aug 19, 2015 05:03pm
On January 15, 2009… US Airways Flight 1549…experienced an almost complete loss of thrust in both engines after encountering a flock of birds and was subsequently ditched on the Hudson River about 8.5 miles from LaGuardia Airport (LGA), New York City… The flight… had departed LGA about 2 minutes before the in-flight event occurred.
That’s from the 200-page report (NTSB/AAR-10/03) issued by the National Transportation Safety Board. Among the reasons I’ve been reading the report is to learn more about the interplay between training, learning, performance support, and the environment in which this emergency took place.
The NTSB report cites four major factors contributing to the survival of all 150 passengers and 5 crew members:
The decisions and "crew resource management" of the flight crew
The airplane itself, which was equipped with forward slide/rafts although these were not required on this flight
The performance of the cabin crew in expediting the evacuation of the airplane
The proximity and rapid arrival of emergency responders
A quick timeline:
At 3:24 p.m. Eastern time, the tower cleared 1549 for takeoff.
At 3:25:51, the captain reported the plane was at 700 feet, climbing to 5,000.
At 3:27:10, "…the captain stated, ‘Birds.’ One second later, the CVR [cockpit voice recorder] recorded the sound of thumps and thuds followed by a shuddering sound."
The report notes that the altitude was 2,818 feet and that engine speed started to decelerate.
At 3:27:23, the captain took over control of the plane from the first officer, telling him, "Get the QRH [quick reference handbook] loss of thrust on both engines."
Captain Chesley Sullenberger later reported that when he said this, First Officer Jeff Skiles already had the checklist out-showing how the two worked smoothly throughout the emergency.
At 3:27:50, the first officer began calling out steps in the Engine Dual Failure checklist.
At 3:29:11, the captain announced to the cabin, "Brace for impact."
At 3:30:41, the cockpit equipment broadcast "a 50-foot warning." The flight data recorder reported 33 feet.
From bird strike to ditching, about three and a half minutes.
Who Does What, and What Gets Done?
In an interview with Air and Space Smithsonian, Sullenberger discussed his collaboration with First Officer Jeff Skiles. Typically, he said, the first officer flies the plane, and the captain monitors. In this case, "even though Jeff was very experienced…[with] as much total flying experience" as Sullenberger, it was the first time Skiles had been on an Airbus A320 since training. So Sullenberger decided "we were best served by me using my greater experience in the [A320] to fly the airplane."
I also thought that since it had been almost a year since I had been through…recurrent training, and Jeff had just completed it…he was probably better suited to quickly knowing exactly which checklist would be most appropriate, and quickly finding it in this big multipage quick reference handbook that we carry in the cockpit.
Checklists and Focus
The NTSB report, in Appendix C, reprints the three-page Eng Dual Failure checklist. Skiles and Sullenberger lacked time to get through more than the first page. As it is, the checklist notes that "optimal relight speed" [for the engines] is 300 knots. Skiles at the time said, "We don’t have that." The report states that the maximum airspeed after the bird strike was 214 knots.
The checklist also assumes far more altitude than 1549 had. Step 3, on page 3 of the checklist, starts with what to do above an altitude of 3,000 feet.
Accidents and incidents have shown that pilots can become so fixated on an emergency or abnormal situation that routine items (for example, configuring for landing) are overlooked. For this reason, emergency and abnormal checklists often include reminders to pilots of items that may be forgotten. Additionally, pilots can lose their place in a checklist if they are required to alternate between various checklists or are distracted by other cockpit duties; however, as shown with the Engine Dual Failure checklist, combining checklists can result in lengthy procedures. [NTSB report, p. 92]
It seems clear to me that both captain and first officer believed that the engine-failure checklist was the best procedure to use. While there is a procedure (a checklist) for ditching the A320, 1549′s crew never got to use it. "Time would not allow it," Sullenberger said in the A&S interview. "The higher priority procedure to follow was for the loss of both engines. The ditching would have been far secondary to that."
Elsewhere the report notes that "low-altitude, dual-engine failure checklists are not readily available in the industry" — in other words, this is not limited to US Airways or to Airbus.
Adding to stress for the flight crew was an array of alarms and warnings. The ditching checklist, which they had no time to consult, included steps "to inhibit the ground proximity warning system and terrain alerts." In other words, since you know you’re ditching, you can shut these alarms off.
Training
According to the NTSB report, training at US Airways for dual-engine failure involves a full-flight simulator in which the failure occurs at 25,000 feet. No training scenarios involve "traffic pattern altitudes," which I take to mean "near airports." In addition, "dual-engine failure scenarios were not presented during recurrent training." A similar approach is true for Airbus’s training.
The outcome
Sullenberger, Skiles, and the cabin crew (Sheila Dail, Donna Dent, and Doreen Welsh, each with at least 26 years’ experience with the airline) worked together to save the lives of 150 passengers. Media reports tend to concentrate on the pilot’s actions, which were essential, since together with the first officer he was able to ditch the plane in a survivable manner.
The NTSB report notes that the accident "has been portrayed as a ‘successful’ ditching." It notes that the success "mostly resulted from a series of fortuitous circumstances" including these:
An experienced flight crew
Good visibility and calm water
Extended-over-water equipment (e.g., rafts) on the plane though not required for this flight
Nearness of vessels and responders available to rescue passengers and crew
Complex skills are…complex
I don’t have grand conclusions to put here. I do think that the Sullenberger interview, and the details in the NTSB report, provide more balance than many mass-media "miracle on the Hudson" reports. Clearly a success, in that everyone survived. The causes of that success, and how to increase the likelihood of similar success in the future, are much more complex.
For example: Sullenberger at one time was a glider pilot. A&S asked how that experience helped him. "I get asked that question…a lot," he said, "But that was so long ago, and those are so different from a modern jet airliner, I think the transfer [of experience] was not large."
For all of 1549′s crew-in the cockpit and in the cabin-performance resulted from experience, and experience was shaped not only through time in the air, but through regular training intended to focus on critical events, to provide feedback, and to increase the likelihood of success in critical, unpredictable situations.
Consider by way of contrast a large group of untrained people: only 77 passengers (just over half) evacuated with their seat cushions. This seemingly small element is a performance challenge: most passengers pay little attention to the safety briefing, and almost no one reads the safety card. The NTSB report suggests that those who took cushions did so because all preflight briefings point out that the cushion "may be used as a flotation device." In other words, some passengers were apparently habituated to that information and able to recall it when needed.
Life vests were not mentioned in the preflight safety briefing because 1549 was not an "extended overwater" flight. 19 passengers attempted to retrieve life vests from under their seats; only 3 "were persistent enough to eventually obtain the life vest." 30 others tried to put a vest on once outside the plane, but only 4 said they were able to do so properly.
Small, regular deposits
You’d be hard pressed to find a better summation of building your own expertise than the way Sullenberger expressed himself to Katie Couric of CBS News:
One way of looking at this might be that for 42 years, I’ve been making small, regular deposits in this bank of experience, education, and training. And on January 15 the balance was sufficient so that I could make a very large withdrawal.
Dave Ferguson | Blog | Aug 19, 2015 05:03pm
Okay, I confess. Elmore Leonard does not have any advice for better training. Not that I know of. But a book review I read yesterday reminded me (as if that were necessary) how much I’ve enjoyed his writing.
I also enjoy the list of rules for writing he says he’s picked up along the way. An example:
5. Keep your exclamation points under control.
You are allowed no more than two or three per 100,000 words of prose.
One thing he strives for, he says, is "to remain invisible when I’m writing a book." So the rules are "to help me show rather than tell what’s taking place in a story."
That’s a pretty decent starting point to take if you’re creating something that’s meant to help other people learn. So I thought I’d see if you could adapt his rules to designing for learning in the workplace. Or to supporting learning. Or at least to keeping away from CEUs and the LMS.
1. Never open with "how to take this course."
Angry Birds is a software application in which you launch suicidal birds via an imaginary slingshot to retaliate against green pigs who’ve stolen eggs and are hiding in improbable shelters. 12 million people have purchased Angry Birds in the past two years, none of them because of the "how to play Angry Birds" module.
Honestly, there are only two groups of people who look for "how to take this course." In the first group are those who designed the course, along with the lowest-ranking members of the client team. In the second group are folks who still have their Hall Monitor badge from junior high.
2. Never begin with an overview.
I can’t do any better than Elmore Leonard on this one:
[Prologues] can be annoying, especially a prologue following an introduction that comes after a foreword. But these are ordinarily found in nonfiction. A prologue in a novel is backstory, and you can drop it in anywhere you want.
Cathy Moore worked with Kinection to create the military decision-making scenario, Connect with Haji Kamal. If you haven’t seen it, click the link. It takes about 10 minutes. And notice: the overview on that first page is 17 words long.
3. Never use "we" when you mean "you."
Maybe I was a grunt for too long. Maybe I’m just contrary. But anytime I run across some elearning that’s yapping about "now we’re going to see," I think, "who’s this we?"
"We" is okay when you’re speaking in general about a group to which you and your intended audience both belong. But especially in virtual mode, it wears out quickly.
4. Don’t act like you’re the marketing department. Even if you are.
This is a first cousin to the "we" business. Once at Amtrak, a group of ticket clerks was learning a marketing-oriented approach to questions about our service. When a customer asks, "What time do you have trains to Chicago?" the proactive response is to fill in the formula, "We have ___ convenient departures at (time), (time), and (time)."
For several stations in my area, there was one train a day in each direction. From Detroit to Chicago at the time, there were two.
It’s not only bombastic to talk like this, it also confuses a feature with a benefit, a distinction any good salesperson would explain if marketing just asked. It doesn’t matter if you have sixteen departures a day if I, the customer, don’t find any of them convenient.
5. Keep your ENTHUSIASM under control!!
I suppose there’s less of this around than there used to be. I’m a staunch believer in the value of feedback. I believe just as firmly that feedback needs to be appropriate to the context. Shouting "That’s great!" for trivial performance mainly makes people feel like they’ve time-traveled back to an ineffective third grade.
6. Don’t say "obviously."
The thing about the obvious is: people recognize it. That’s why so few of us are surprised when we press the button for the fifth floor and the elevator eventually stops there. Except in a humorous tone (and remember, tragedy’s easy; comedy’s hard), words like "obviously" and "clearly" can sound maddeningly condescending.
7. Use technobabble sparingly.
When does tech talk become babble? When it doesn’t pertain to the people you’re talking with. If I’m discussing interface design with people who work in design-related areas, then "affordances" probably makes sense to them. But, for example, if in a post here on my Whiteboard I say that Finnish has features of both fusional and agglutinative languages, I can think of perhaps one frequent reader who has any idea what that means. Accurate as those terms might be for linguists, they’re a dead loss for a general audience.
8. Avoid detailed descriptions of things that don’t matter to people doing the work.
This goes with the backstory remarks above, but I’m also thinking of any number of computer-user training sessions I’ve seen. One GE executive told me that the typical Fortune 100 company has more than 50 mainframe-based computer systems, most of which don’t talk well to each other.
What does that have to do with training people to use them?
The typical worker could not possibly care less whether it’s a mainframe, whether it uses Linux, who built it, where it stores data. If he thinks about cloud computing at all (unlikely), he suspects the phrase is mostly puffery, the IT equivalent to "available at fine stores everywhere."
The introductory course at a federal agency dealing with pensions was stuffed to stupefaction with that sort of data-processing narcissism. What the participants in the course needed to know was: What’s my job, and how do I do it?
The answer is never "the QED Compounder pre-sorts input from the Hefting database in order to facilitate the Rigwelting process." Even if these things are technically true (and who could tell?), they’re meaningless.
Not to say that a quick summary of the process is worthless. It simply has to make sense:
"The Intake Group reviews personnel records from a new pension plan and makes sure they can go into our system so we can analyze them. Once the Intake Group finishes, we cross-check the new account in our system to uncover any conflicts with the data as it appeared in the original system. The team at the Resolution Desk handles the conflicts that can’t be fixed quickly."
9. Don’t go into great detail describing the wonderfulness of the business, the product, or the CEO.
As Bear Bryant said once about the motivational codswallop so beloved at alumni dinners, "People love to hear that shit. Winning inspires my boys."
Certainly you want people to recognize the good qualities: what makes the company and its product valuable to the customer (and thus to the shareholder). Since people in structured organizational learning already work for the outfit, they’ve already got plenty of information, and most likely an opinion, about its world-class, paradigm-shifting splendor.
10. Try to leave out the parts people don’t learn.
What don’t people learn?
They don’t learn what’s trivial (except to get through the unavoidable Jeopardy-style quiz). They don’t learn what doesn’t relate to their job (or to a job they’d like to have). They don’t learn what they don’t get a chance to practice. They don’t learn what they don’t need to learn because they already know where to look it up when they need to.
And they especially don’t learn what they knew before they got there.
If it sounds like "training," redo it.
Training, learning, performance — these are all variations on a theme. I believe if you talk too much about the process of how you train, or how you learn, people nod off quickly. This is especially true of the beloved rituals of the Stand-Up Instructor: icebreakers, going-around-the-room introductions, that creative nine-dot puzzle, and your expectations for this course.
I do think it’s good to find out what people want or hope or expect, but really: if this is a workshop on designing job aids, then assuming I could read the sign at the front, I’m not here for Assumptive-Close Selling for the INFP.
Dave Ferguson | Blog | Aug 19, 2015 05:02pm
(Click image for a downloadable version of the regulations on Scribd)
I have a collection of job aids, some going back more than 50 years. I keep them for various reasons: some are amusing, many are creative, and all of them are examples of helping people to perform some task.
What makes a job aid a job aid?
It presents information that’s external to the performer.
It’s used on the job. It’s part of how the performer carries out some task.
It enables accomplishment: when a person uses the job aid, he can accomplish some result that he couldn’t otherwise.
It reduces the need for memorization.
I use "memorization" here only as a label for some of what we call learning, which is storing certain knowledge in your memory so you can retrieve it and apply it in the proper context. (I know, I’m oversimplifying.)
Job aids, when used appropriately, offload some of the cost of memorization: instead of learning all the information or steps for some task, the person learns how to use the job aid to carry the task out.
Just as not every task is suited to job-aiding, not every person can use every job aid. A job aid supports the performance of a particular job, or at least the completion of a particular task. Implicit in that is a certain level of background knowledge and overall capability. If you don’t know much about photography and digital images, then a job aid for some advanced feature in Photoshop probably was not designed for you.
Please understand that I mean no offense, and realize I’m making a possibly unjustified assumption, when I say that you, esteemed reader, probably couldn’t make good use of a job aid for a forequarter amputation.
That’s the surgical removal of someone’s arm and shoulder. It’s not a common procedure. In the UK, doctors perform about 10 per year, mainly on cancer patients.
David Nott performed one, too. He’s a vascular surgeon who volunteers with Médecins Sans Frontières (Doctors Without Borders). In 2008, while serving in the Democratic Republic of Congo, he was confronted with a boy who’d lost most of his arm. The child was at grave risk of dying; Nott knew the only procedure that could help him was a forequarter amputation.
As the BBC reported, Nott believed he had the surgical skill but wasn’t sure he knew all the steps for this specific operation. There was no way to get real-time support (say, over an open phone line). So he contacted Dr. Meirion Thomas of London’s Royal Marsden Hospital, who provided performance support… via text message.
Click to view video on The Telegraph's site
Notice how Thomas’s instructions rely on skill and knowledge that Nott already had. "Cont(r)ol and divide (the) subsc(apular) art(ery) and vein" is one highly compressed step. Professor Thomas knew that the intended performer could get around the typo, could identify the vessels, and would know the meaning of "control" and of "divide."
Nott told The Telegraph he was able to carry out the three-hour procedure thanks to the guidance, which is one of the truly distinctive job aids in my collection.
Dave Ferguson | Blog | Aug 19, 2015 05:02pm
I’m experimenting with changes to this blog. Mostly they have to do with the theme. If you’re not a blogger, that’s the collection of WordPress files that controls the appearance of the blog: not just the arrangement of colors and typefaces, but also the display and position of features like those you see in the sidebar ("latest posts," "latest comments," and so on).
I really liked my longtime theme, Simpla. The white space went well with the blog’s name. But it hasn’t been updated in a long time, and it’s not widget-aware.
For the three people still reading, a widget is a little drag-and-drop control. For example, with the old theme, to have a drop-down list for displaying archives by month, I had to edit the PHP code for the sidebar and add this:
<li id="archives"><h2><?php _e('The last few months'); ?></h2>
<ul>
<?php wp_get_archives('type=monthly&limit=6'); ?>
</ul>
<!-- END ARCHIVES -->
<!-- Archive dropdown -->
<h4>Or any month at all:</h4>
<select name="archive-dropdown"
  onChange='document.location.href=this.options[this.selectedIndex].value;'>
<option value=""><?php echo attribute_escape(__('Select Month')); ?></option>
<?php wp_get_archives('type=monthly&format=option&show_post_count=1'); ?>
</select>
</li>
Nothing to it.
(click to enlarge)
Using more up-to-date themes (like Suffusion, which I’m experimenting with), you can add or delete widgets without having to worry about forgetting an angle bracket or a semicolon.
The example on the right is taken from this blog as I write. Suffusion allows for multiple sidebars, the spaces outside of the main post area. I dragged five different widgets into Sidebar 1; the order in which I place them is the order in which they appear.
That example includes the Archives widget, which I left open to show how easy it is to customize the title and to say whether you want the archive as a full list, or as a dropdown. Since I’ve been scribbling on this Whiteboard for more than 5 years, I didn’t think the full list was the best option.
The Series TOC widget (second from last in the example) is another benefit I get from a more up-to-date theme. I’ve written several post series, and the widget automatically displays titles for the first post in each one. When I begin another series, I don’t have to do anything to update that table-of-contents; as long as I’ve named the series (through another WordPress gizmo called a plugin), the widget will include the new title.
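What makes a theme "widget-aware" is that it registers sidebar areas with WordPress, so widgets can be dragged in without touching the sidebar's PHP. Here's a minimal sketch of that registration in a theme's functions.php; the function name and sidebar title are hypothetical, not Suffusion's actual code, though `register_sidebar()` and the `widgets_init` hook are standard WordPress:

```php
<?php
// Minimal sketch: register one widget-ready sidebar in functions.php.
// Function and sidebar names here are made up for illustration.
function whiteboard_register_sidebar() {
    register_sidebar( array(
        'name'          => 'Sidebar 1',
        'id'            => 'sidebar-1',
        // Markup WordPress wraps around each widget and its title:
        'before_widget' => '<li id="%1$s" class="widget %2$s">',
        'after_widget'  => '</li>',
        'before_title'  => '<h2 class="widgettitle">',
        'after_title'   => '</h2>',
    ) );
}
add_action( 'widgets_init', 'whiteboard_register_sidebar' );
```

Once a sidebar is registered this way, dragging in the Archives widget replaces hand-edited snippets like the one above: WordPress supplies the wrapping markup, and you pick list-versus-dropdown from the widget's own settings.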
As I tinker with changes, cosmetic or (I hope) substantive, I’ll probably make mistakes, which is why I say things will be up in the err for a while. And I do have some grunt work to do: the random-quote feature, which I’m fond of, can’t retrieve the 300-odd quotes stored in the database. It looks like I may have to re-enter them one at a time.
Dave Ferguson
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Aug 19, 2015 05:02pm</span>
|
Reference job aid is a term I use for any job aid that collects or lays out information so that someone can look up a meaning, decode an example, or perform other kinds of work with facts.
(Over the next few days, I’ll post several examples of real-world job aids. This is the first one.)
The image below and its accompanying table of callouts are taken from the Institution Rules and Regulations for the former United States penitentiary on Alcatraz Island, California. As the regulations make clear, an inmate was entitled to food, clothing, shelter, and medical care. Anything else was a privilege and could be revoked.
Who used this job aid?
My guess is: guards, to explain to inmates how their cells were to be organized, and to make certain that cells conformed to the rules. Also, possibly, the inmates themselves, though I have a suspicion it would be more to justify some claim: "Hey, I’m allowed to have up to twelve books."
What was the task it supported?
Most likely, it was a reference for what someone can have in his cell, and what he is not allowed to have. (In the latter case, if an item is not pictured here, it’s not permitted. This is one way for you to be certain that Robert Stroud, despite the title of a movie, never kept birds while at Alcatraz. Apparently as a title The Birdman of Leavenworth didn’t sound as striking.)
If you’re wondering whether you should build a job aid to support some task, this is the first of a three-part guide to help you figure things out.
That first consideration ("Is a job aid required?") isn’t as daft as it might seem. If your organization mandates a job aid for some task, then you’re stuck. You want to do the best job you can with it (or maybe you don’t), but unless you convince the right people to reverse the policy, somebody’s going to be building a job aid.
Which means you can skip the rest of the "should I build?" stuff that will appear in Parts 2 and 3.
Assuming that a job aid isn’t mandatory, the next question is whether speed or rate is a critical factor in performing whatever the task is. The short answer is that if speed matters, a job aid isn’t going to work.
First, when it comes to routinely high-volume work like factory production or air-traffic control, that normal high-volume state doesn’t allow the performer time to consult a job aid. Successful results depend on learning-on committing skill and knowledge to memory, and on retrieving and applying those things appropriately.
I’m a pretty fast typist (65 - 80 words per minute if I’ve been writing a lot), but the moment I glance down at the keyboard my rate drops, because the visual signal interferes with the virtually automatic, high-rate process I normally use at a keyboard.
That’s rate. As for speed, many jobs call for you to apply knowledge and skill in an unscheduled fashion, but quickly. Think about safely driving a car through a tricky situation, much less an emergency. You don’t have the opportunity to consult a job aid. If a kid on a bike suddenly pulls out in front of you, you can’t look up what to do.
Anyone who’s helped train a new driver knows what it’s like when the novice is trying to decide if it’s safe to turn into traffic. We experienced drivers have internalized all sorts of data to help us decide without thinking, "Yes, there’s plenty of time before that bus gets here; I can make the left turn." In the moment, the newcomer doesn’t have that fluency but has to be guided toward it-just not via a job aid.
What’s next?
Once you’ve determined that you’re not required to build a job aid, and that there’s no obstacle posed by a need for high speed or high rate, you’ll look at the nature of the performance for clues that suggest job aids. That’ll be the next post: Ask the Task.
CC-licensed image of seabirds by Paul Scott.
Example of a deployed fire shelter
Flowcharts, along with their decision-table siblings, guide a person through choices, evaluations, or decisions. As an example of a flowchart, I’m using inspection guidelines for a personal fire shelter. The guidelines come from the USDA Forest Service website (specifically, Fire Shelter Inspection Guide and Rebag Direction).
A fire shelter is a last-ditch, personal-protection device, meant to radiate heat away from a firefighter who’s been trapped by a fire. The shelter’s pup-tent shape encloses air for the firefighter to breathe as the fire passes over.
The photo on the right is taken from page 16 of The New Generation Fire Shelter, a 2003 publication. Although the text doesn’t say so explicitly, a portion of the shelter appears to have been cut away so you can see how the firefighter lies within it after it’s deployed. (There are hand straps to hold the shelter down.)
Firefighters receive a fire shelter as part of their equipment, and one of their responsibilities is to inspect it regularly. That’s what the guidelines are for.
A fire shelter in its bag
The second photo shows what a fire shelter looks like in its bag. Firefighters leave the bag closed until they have to deploy it, which explains the need to inspect the bag regularly.
There’s more to the guide than appears below; I’m just highlighting its flowchart. Which is a good excuse for me to point out that most job aids are combinations of techniques-for example, step-by-step instructions (a cookbook) combined with decision guidance (like a flowchart).
Who uses this job aid?
A Forest Service firefighter or a person with similar responsibilities. While you could use this to help inspect any fire shelter, the language in the guide implies that you’re inspecting your own.
What’s the task being guided?
Determining whether a fire shelter has any defects that would render it unsafe.
Notice the number of decisions involved:
Is there moisture in the bag?
What’s the status of the bag itself?
Are there holes? How many, and what size?
Does it have a label with a red R?
Does it have a yellow rebag label?
I want to emphasize, because of the nature of the task, that the full guide has a number of photo examples (e.g., this is what a label with a red R looks like).
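The decision sequence above can be sketched as code. This is strictly my own illustration: the questions come from the guide, but the outcomes and the hole-size threshold are invented placeholders, not the Forest Service’s actual criteria.

```python
# Hypothetical sketch of the inspection flowchart. The questions are from
# the guide; the dispositions and the 6 mm threshold are my own placeholders,
# NOT the Forest Service's real criteria.

def inspect_shelter(moisture_in_bag, bag_damaged, hole_count,
                    largest_hole_mm, has_red_r_label, has_yellow_rebag_label):
    """Walk the inspection questions in order and return a disposition."""
    if moisture_in_bag:
        return "remove from service"
    if bag_damaged:
        return "rebag"                       # placeholder outcome
    if hole_count > 0 and largest_hole_mm > 6:
        return "remove from service"         # placeholder threshold
    if has_red_r_label:
        return "remove from service"         # placeholder outcome
    if not has_yellow_rebag_label:
        return "rebag"                       # placeholder outcome
    return "passes inspection"
```

The point isn’t the particular answers; it’s that a flowchart is just a fixed order of yes/no questions, each one routing you toward a result.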
(See full guide at the U.S. Forest Service site)
The previous post in this series covered the initial go/no-go decisions: are you required to build a job aid? Does a need for rate or speed make a job aid impractical?
If the answer in both cases is no, then you don’t have to build a job aid, yet there’s no reason not to (so far). A good way forward at this point is to consider the characteristics of the real-world performance you have in mind. This is related to, though not the same as, task analysis. I have my own name for it: Ask the Task.
What that means is: use what you know about the task to help determine whether building a job aid makes sense. You can go about this in many ways, but the following questions quickly cover a lot of the territory.
♦ ♦ ♦
How often does someone perform the task?
"Often" is a relative term-in fact, most of the questions in Ask the Task are relative. That doesn’t mean they’re not pertinent. Asking "how frequent is frequent?" turns your attention to the context of the task and the people who typically carry it out.
Frequency isn’t the same thing as regularity. Some tasks are frequent and predictable, like a weekly status update. Some are more random, like handling a payment by money order. And some are much more rare, like a bank teller in Vermont handling a money transfer from Indonesia.
Whether you end up building a job aid, designing training, or just tossing people into the deep end of the performance pool, you need some idea of how frequent "frequent" is, and where the specific task might fall along a job-relevant frequency scale.
Think about what frequency might tell you about whether to build a job aid. Yes, now. I’ll tell you more at the end of the post, but we both know you ought to do some thinking on your own, even if we both suspect few other people will actually do that thinking while they read this.
♦ ♦ ♦
How many steps does the task have?
It’s true, some tasks don’t really seem to have steps. Or they have very few: look up the arguments for the HTML <br> tag. And some tasks have so many that it might make sense to break them up into logical subgroups: setting up the thermoformer. Testing the thermoformer. Troubleshooting problems after the test.
Think of "step" as the lowest level of activity that produces a result that makes sense to the performer on the job. If I’m familiar with creating websites, then "create a new domain and assign it to a new folder in the \public_html directory" might be two steps (or maybe even one). If I’m not familiar with creating websites, I’m going to need a lot more steps.
That makes sense, because a job aid is meant to guide a particular group of performers, and the presumption is that they share some background. If you have widely differing backgrounds, you might end up with two versions of a job aid-see the Famous 5-Minute Install for WordPress and the more detailed instructions. Essentially, that’s two job aids: one for newcomers (typically with more support) and one for more experienced people.
As with frequency, you need to think about how many steps the task involves, and whether those strike you as relatively few, or relatively many.
♦ ♦ ♦
How difficult are the steps?
You can probably imagine tasks that have a lot of steps but not much complexity. For someone who’s used to writing and who has solid, basic word processing skills, writing a 25-page report has plenty of steps, but few of them are difficult (other than getting reviewers to finish their work on time).
In the same way, a task can have relatively few steps, but many of them can be quite difficult.
That’s the reason for two step-related considerations when you Ask the Task whether a job aid makes sense: how many? How hard?
Pause for a moment and think which way you’d lean: if the steps in a task are difficult, does that mean "job aid might work," or does that mean "people need to learn this?"
♦ ♦ ♦
What happens if they do it wrong?
This question focuses on the consequences of performing the task incorrectly. Whether a person has a job aid or not is immaterial-if you don’t perform correctly, what happens? Personal injury? Costly waste or rework? Half an hour spent re-entering the set-up tolerances? Or simply "re-enter the password?"
As with the other questions, you need to think about the impact of error in terms of the specific job. And, if you haven’t guessed already, about the relationship between that impact and the value of building a job aid.
♦ ♦ ♦
Is the task likely to change?
We’re not talking about whether the job aid will change, because we still haven’t figured out if we’re going to build one. We’re talking about the task that a job aid might guide. What are the odds the task will change? "Change" here could include new steps, new standards, new equipment, a new product, and so on.
♦ ♦ ♦
Ask the task, and the job aid comes out? Right!
You’ve probably detected a pattern to the questions. So the big secret is this:
The more your answers tend to the right, the stronger the case for a job aid.
What follows is the 90-second version of why. (As you read the lines, just add "all other things being equal" to each of them.)
The less frequently someone performs a task, the likelier it is that he’ll forget how to do it. If you’re an independent insurance agent whose practice mostly involves homeowner’s and driver’s insurance, and you write maybe six flood insurance policies a year, odds are that’s not a task you can perform without support. Job aids don’t forget.
The more steps involved in the task, the more challenging it will be for someone to retain all those steps correctly in memory and apply them at the right time. Job aids: good at retention.
The more difficult the steps are, the harder the performer will find it to complete each step appropriately. A job aid can remind the performer of criteria and considerations, and even present examples.
The higher the impact of error, the more important it is for the performer to do the task correctly. You certainly can train people to respond in such circumstances (air traffic control, emergency medical response, power-line maintenance), but usually that’s because the performance situation or the time requirement demands such learning. Otherwise, a well-designed job aid is a good way to help the performer avoid high-cost error.
The more changeable the task, the less sense it makes to train to memory. Mostly that’s because when the change occurs, you’ll have to redo or otherwise work at altering how people perform. If instead you support the likely-to-change task with job aids, you’ve avoided the additional cost of full training, and you mainly need to replace the outdated job aid with the new one.
Here are the ask-the-task questions, together once more:
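One way to see the pattern is to treat the five questions as a rough checklist. The one-point-per-answer scoring below is my own invention, purely illustrative; as the post says, the real answers are relative judgments, not booleans.

```python
# Illustrative checklist for the five Ask the Task questions. The
# one-point-per-"yes" scoring and the 3-point cutoff are my own invention,
# not a formal instrument; a right-leaning answer simply strengthens the case.

QUESTIONS = [
    ("Is the task performed infrequently?", "infrequent"),
    ("Does the task have many steps?", "many_steps"),
    ("Are the steps difficult?", "difficult_steps"),
    ("Is the impact of error high?", "high_error_impact"),
    ("Is the task likely to change?", "likely_to_change"),
]

def job_aid_case(answers):
    """answers: dict mapping the keys above to True/False."""
    score = sum(1 for _, key in QUESTIONS if answers.get(key))
    verdict = "strong case for a job aid" if score >= 3 else "weaker case"
    return score, verdict

# Example: the flood-insurance task from above -- rare, multi-step, tricky,
# costly to get wrong, but fairly stable.
score, verdict = job_aid_case({
    "infrequent": True, "many_steps": True, "difficult_steps": True,
    "high_error_impact": True, "likely_to_change": False,
})
# score is 4: a strong case for a job aid
```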
The Scrooge-O-Meter from LSS Financial Counseling Service is an example of a calculator job aid. Calculators guide someone through a task by prompting for numerical values and performing calculations. The idea is to help a person reach some conclusion without having to master the factors or the math involved.
(LSS Financial Counseling Service is part of the work of Lutheran Social Service of Minnesota.)
Who uses this job aid?
Most likely someone trying to understand the added financial burden of buying on credit. (See additional thoughts from the group that created it, later in this post.)
What is the task it supports?
I would say "awareness" or even "empowerment." The goal is to help someone understand the additional cost of purchasing on credit. I filled in the numbers you see in this example. The result says to me that "spreading out" credit payments for my holiday buying makes those purchases nearly 10% more expensive than I’d thought.
Notice that it doesn’t render judgment ("$68.72 extra? Are you nuts?!?"). The job aid simplifies the process so I can more readily see and understand the impact of buying on credit. I’m free to make my own decisions about what to do next.
More about the Scrooge-O-Meter
LSS Financial Counseling Service wants consumers to know that they can turn to a national network of nonprofit financial counseling and debt management services (FCS is a member of that network). The page with the Scrooge-O-Meter offers a toll-free number, online counseling, a newsletter, and other resources.
Darryl Dahlheimer, program director of LSS Financial Counseling Service, was kind enough to agree to its appearing here and also to provide these details:
There are many tools to help consumers calculate credit card repayment, but here are three reasons we like this one:
It sets a playful tone, to overcome the shame/intimidation of finances for so many who feel "dumb about money" but want to learn.
It helps make the true cost of using credit visible. Plug in an example of buying that $500 iPad at a major store on their 21% interest credit card and then paying only the $15 minimum each month. You will pay a whopping $757 and take over four years to pay off.
Conversely, it allows you to see the tangible benefits of paying more than minimums.
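A few lines of code reproduce the iPad arithmetic. This is my own back-of-the-envelope simulation, not the Scrooge-O-Meter’s actual code; it assumes interest compounds monthly and that the $15 minimum payment stays fixed until the balance is gone.

```python
# Back-of-the-envelope check of the $500-iPad example: 21% APR, fixed $15
# minimum payment. My own sketch, not the Scrooge-O-Meter's actual formula;
# assumes monthly compounding and a payment that never changes.

def payoff(balance, apr, payment):
    """Return (months, total_paid) for a fixed monthly payment."""
    monthly_rate = apr / 12
    months, total_paid = 0, 0.0
    while balance > 0:
        balance *= 1 + monthly_rate      # add a month's interest
        pay = min(payment, balance)      # the final payment may be smaller
        balance -= pay
        total_paid += pay
        months += 1
    return months, round(total_paid, 2)

months, total = payoff(500.00, 0.21, 15.00)
# 51 months -- over four years -- and about $757 paid in all
```

The simulation lands right on the numbers Dahlheimer quotes: roughly $757 over more than four years.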
The first two parts of this series, in one line each:
Is a job aid mandatory? If not, does speed or rate on the job prohibit the use of a job aid?
Do the characteristics of the task tell you that a job aid makes sense?
If they do, you might feel ready to leap right into design. But in the real world, people don’t just perform a task; they work within a complex environment. So the third part of your decision is to ask if any obstacles in that environment will hamper the use of a job aid.
You could ask these questions in either order, but physical barriers are sometimes easier to address than social ones.
Often people have to work in settings where a job aid might be a hindrance or even a danger. Someone repairing high-tension electrical lines, for example. Or someone assembling or disassembling freight trains at a classification yard:
You don’t need to watch this video about humping railroad cars, but as the narrator points out around the 4:00 mark, in the distant past a worker would have to ride each car as gravity moved it down a manmade hill (the hump), applying the brake by hand if the car was moving faster than about 4 mph. It would have been impossible to give the brakeman a job aid for slowing the car, so his training (formal or otherwise) would have required lots of practice and feedback about judging speed. And possibly some trial and error.
Texas highway map, 1936
Rather than develop impractical job aids for aspects of this set of tasks, modern railroads rely on computers to perform many of them. For example, radar monitors the speed of cars more accurately than a person could, and trackside retarders act to moderate that speed.
Remember, the goal is not to use job aids; the goal is to produce better on-the-job results. Sometimes you can do that by assigning difficult or repetitive tasks to machinery and automation.
In many cases, though, you can overcome physical obstacles to the use of a job aid by changing its form. No law requires a job aid to be on an 8 1/2 by 11 inch laminated piece of paper. Nor on the formerly ubiquitous, multifolded paper of a highway map.
A road map can support different kinds of tasks. You can use it at a table to plan where you’re going to go, to learn about the routes. No barriers to such use. But for a person who’s driving alone, a paper road map is at best a sub-optimal support. It’s hard to use the map while trying to drive through an unfamiliar area.
Deep in the heart of Oslo
Real-time support for the driver now includes GPS satellites, wireless technology, a constantly updated computer display-and a voice.
That voice is transformative: it’s a job aid you don’t have to read. Because the GPS gives timely, audible directions, there’s no need to take your eyes off the road and decipher the screen.
Other examples of overcoming physical barriers: attach the job aid to equipment. Use visual cues, like a change of color as movement or adjustment gets closer to specification. Combine audio with voice-response technology ("If the relay is intact, say ‘okay.’ If the relay is damaged, say ‘damaged.’")
But he had to look it up!
Overcoming physical barriers is one thing. Overcoming social barriers is…a whole bunch of things. Your job aid will fail if the intended performer won’t use it.
Popular culture places a great value on appearing to know things. When someone turns to an external reference, we sometimes have an irrational feeling that she doesn’t know what she’s doing-and that she should. In part, I think we’re mistaking retention of isolated facts for deep knowledge, and we think (reasonably enough) that deep knowledge is good.
At its worst, though, this becomes the workplace equivalent of Trivial Pursuit. A railroading example might be someone who can tell you not only the train numbers but the locomotive numbers that ran on a certain line decades ago-but who can’t issue you a ticket in a prompt, accurate, courteous manner.
The performer herself may believe that performance guided by a job aid is somehow inferior. Coworkers may hold that belief, putting pressure on the individual. Even clients or other stakeholders may prefer not to see the performer using a job aid.
Maybe there’s a way around this bias. The job aid could be embedded in a tool or application, such that the performer is merely applying one feature. That’s essentially what a software wizard does. Watch me turn this data into a chart-I just choose what I want as I go along.
(And doesn’t "choose what I want" sound much more on top of things than "look stuff up?")
For an injection gun used for immunizations in third-world settings, healthcare workers occasionally had to make adjustments to clear jams and similar equipment glitches. Some senior workers did not want to seem to need outside help to maintain their equipment, but couldn’t retain all the steps. (Remember in Part 2? Number of steps in the task, complexity of steps?) So the clearing instructions were attached to the equipment in such a way that the worker could follow the job aid while clearing the gun.
♦ ♦ ♦
The considerations here aren’t meant as either exhaustive or exclusive. They are, however, important stops to make, a kind of reality check before you hit the on-ramp to job aid design. The reason for building a job aid is to guide performance on the job while reducing the need for memorization, in order to achieve a worthwhile result. If the performer can’t use it because of physical obstacles, or won’t use it because of social ones, the result will be… no result.
CC-licensed photos:
1936 Texas highway map by Justin Cozart.
Norwegian GPS by Stig Andersen.
1879 Michigan Central RR timetable from the David Rumsey Map Collection.
In the current issue of Smithsonian magazine, Teller (of the professional duo, Penn & Teller) reveals some secrets of his art.
First he talks about the world of neuroscience and perception, into which he’s often invited as a speaker. And he makes the point that when it comes to experimenting with human perception, neuroscientists are amateurs compared with magicians.
I recall his partner Penn Jillette saying once that they were not magicians; they were tricksters, swindlers. His point was that nothing in their act was magical. They’re not exempt from the laws of physics. Instead, as magicians have done for thousands of years, they rely on trickery, on quirks of perception.
It’s well worth reading the original (link in the first paragraph, above) to enjoy Teller’s style and to take in the details he provides for points like these:
Exploit pattern recognition. Our brains constantly seek patterns, even where there isn’t one. That’s why the night sky has constellations, but an evenly spaced series of dots seems to have no pattern at all.
Distract with laughter. What Teller’s really talking about here is a kind of cognitive overload-if you’re watching the performance and laughing at the comedy, you’re likelier to miss some small detail. I think the same thing applies when a training exercise is sufficiently engrossing-people don’t care as much about elegant presentation and high-end graphics if the exercises feel like interesting, useful work.
Nothing fools you better than the lie you tell yourself. Here, he’s talking about allowing the audience (or the learner) to reach their own conclusions, make their own judgments, even if as the "designer" he knows these will be erroneous. For a magic act, that means the audience is all the more mystified by the effect-thus, success. When it comes to learning, the learner is comparing a conclusion she arrived at with new data that conflicts with that conclusion. That, gentle reader, is where the learning starts.
He goes on; you don’t need me to repeat it here. I found the article engaging enough that I wanted to see more, and came across a 2008 article in Nature Reviews Neuroscience. In Attention and awareness in stage magic: turning tricks into research, Teller and several coauthors study magic tricks so that "neuroscientists can learn powerful methods to manipulate attention and awareness in the lab."
If you’re doubtful, take a look at this demonstration by one of the coauthors, pickpocket Apollo Robbins.
I think it’s worth the 16 minutes. Watch carefully during the first two-thirds, when (I’m not giving away much here) Robbins actually picks the pockets of a volunteer who’s pretty sure that’s what’s going to happen. You’ll find the subsequent explanation all the more compelling.
"If I’m here (standing alongside the mark), and I want to split his attention… I’ll bring my chin up into his personal space. His head will whip up to my face, and he won’t focus on that movement (of my hands)."
The Nuremberg Funnel, according to Wikipedia, is a humorous expression for a kind of teaching and learning. It implies knowledge simply flowing effortlessly into your brain as you encounter it-or else a teacher cramming stuff in the mind of a dullard.
(The term dates to at least 15th-century Germany, and I suspect the notion of funneling or otherwise stuffing knowledge into someone is a few months older than that.)
The Nurnberg Funnel is humorous as well, in a slightly drier way. John M. Carroll’s 1990 book, subtitled Designing Minimalist Instruction for Practical Computer Skill, describes efforts to help people learn to use computers and software. In 1981, Carroll and his colleagues analyzed problems that people had learning then-new technology like the IBM Displaywriter and the Apple Lisa.
In one extended experiment, Carroll and his colleagues had volunteers work with the Lisa, its owners guide, and the documentation for LisaProject. The goal was to find out what interested but untrained users actually did with these materials.
Mostly what they did was struggle.
On average, the learners took three times the half hour estimated by Apple and enthusiastic trade journals-just to complete the online tutorial. "Two [learners] who routinely spent more than half of their work time using computers… failed to get to our LisaProject learning task at all."
Carroll calls into question what he refers to as the systematic or systems approach to user training. To him this means "a fine-grained decomposition of target skills" used to derive an instructional sequence: you practice the simple stuff before you go on to more complex tasks they contribute to.
Carroll believes that "the systems approach to instructional design has nothing in common with general systems theory." What’s worse is that in the workplace, the highly structured step-by-step approach just doesn’t work.
If only people would cooperate! But they don’t.
The problem is not that people cannot follow simple steps; it is that they do not… People are situated in a world more real to them than a series of steps… People are always already trying things out, thinking things through, trying to relate what they already know to what is going on…
In a word, they are too busy learning to make much use of the instruction.
(that emphasis is Carroll’s, not mine — DF)
After further experiments, Carroll and his colleagues created what they called the Minimal Manual. Earlier they’d made up a deck of large cards "intended to suggest goals and activities" for learners, and useful as quick-reference during self-chosen activity. In chapter 6 of The Nurnberg Funnel, he describes the next stage-a self-instruction manual designed on the same minimalist model.
Training on real tasks
The Minimal Manual used titles like "Typing Something" or "Printing Something on Paper" rather than suboptimal, system-centric ones in the original Displaywriter materials. Carroll’s materials also eliminated material that was not task oriented-like the entire chapter entitled "Using Display Information While Viewing a Document."
At the same time, the experiment included essential material not well covered in the original document. It was easy for learners to accidentally add blank lines but difficult for them to get rid of them. The Minimal Manual turned this into a goal-focused task that made sense to the learner: "Deleting Blank Lines." While not catchy, that title’s a big improvement on "how to remove a carrier return control character."
Getting started fast
In the Minimal Manual the learner switches on the system and begins the hands-on portion of instruction after four pages of introduction. In the systems-style instruction manual, hands-on training begins after 28 pages of instruction.
Learners created their first document only seven pages into the Minimal Manual…. In the commercial manual, the creation of a first document was delayed until page 70.
Carroll shows several ways in which the comprehensive systems-style manual bogs down, overloads the learner, and gets in the way of doing anything that seems like real work. I can remember endless how-to-use-your-computer courses that spent 45 minutes on file structure and hierarchy before the target audience had ever created a document that needed to be saved. This is like studying the house numbering scheme for a city before learning how to get to your new job.
Reasoning and improvising
The Minimal Manual approach included "On Your Own" work projects-for example, make up a document and compose the text yourself. Then try inserting, deleting, and replacing text.
Some explanation is always necessary, but the minimalist approach kept that to… a minimum. "The Displaywriter stores blank lines as carrier return characters." That’s it. You don’t really have to know what a carrier return character is-what’s important to you as a user is (a) it’s what creates blank lines, and (b) if you delete it, you delete the blank line.
In general, this approach introduced a procedure only once. The three-page chapter "Printing Something on Paper" was the only place that printing was explained. Elsewhere, exercises simply told the learner to print. If he wasn’t sure how, he’d have to go back to that chapter.
In part, the team chose this approach because of the endless and often fruitless searching that learners had done in earlier trials, losing themselves in thickets of manuals and documents. The fewer pages you have and the clearer their titles, the easier it is to find what you’re looking for.
Here’s the entire explanation for the cursor control keys:
Moving the cursor
The four cursor-movement keys have arrows on them (they are located on the right of the keyboard).
Press the ↓ cursor key several times and watch the cursor move down the screen.
The ↑, ←, and → keys work analogously. Try them and see.
If you move the cursor all the way to the bottom of the screen, or all the way to the right, the display "shifts" so that you can see more of your document. By moving the cursor all the way up and to the left, you can bring the document back to where it started.
Connecting the training to the system
Carroll’s subhead here is actually "Coordinating System and Training," but I wanted to be more direct. His team deliberately used indirect references in order to encourage learners to pay attention to the system they were learning. In those long-ago days, for example, computers had two floppy-disk drives. The Minimal Manual didn’t tell learners which drive to put a diskette in. "We left it to the learner to consult the system prompts."
Supporting error recognition and recovery
As with other parts of the experiment, Carroll and his colleagues used error information from previous testing to guide the support provided by the Minimal Manual. Multi-key combinations (hold down one key while pressing another) baffled many learners, especially when the labels on the keys were meaningless to them: ("press BKSP, then CODE + CANCL"). And then there was this:
A complication of the Code coordination error is that the recovery for pressing Cancel without holding the Code key is pressing Cancel while holding the Code key.
Good thing we never see anything like that any more, huh?
Exploiting prior knowledge
It’s easy to forget how confusing word processing can be-at least till you try learning some new application for which you have very little background. (I’ve taken a stab at learning JavaScript, and I can see that’s probably not the basis of my next career.) The Minimal Manual strove to counter the relentless, technocratic, system-centric thinking in the original. "The impersonal term ‘the system’ was replaced by the proper name…the Displaywriter."
I can hear IT people I’ve worked with sniffing "so what?" I’ve actually had a programmer say to me, of a useful but very complicated tool, "If they can’t understand this, they don’t deserve it."
One particularly useful approach: document names. Back when most white-collar work did not involve computers, people created paper documents all the time, but rarely thought of documents as requiring a name. (What’s the name of a letter? What’s the name of a memo?) So the bland instruction "Name your document" seems like one more small technical obstacle in the way of getting something useful done.
Carroll’s team had learned that naming created lots of problems for learners, and so found a way to ease learning of this unfamiliar concept.
In the terminology of the Displaywriter you will be "creating a document" — that is, typing a brief letter. You will first name the document, as you might make up a name for a baby before it is actually born. Then you will assign the document to a work diskette — this is where the document will be stored by the Displaywriter. And then, finally, you will type the document at the keyboard, and see the text appear on the screen.
It might still feel odd to have to name a document, but the baby analogy brings the idea a bit closer to what the average person already knows.
♦ ♦ ♦
There’s a great deal more in chapter 6 that I’ll have to return to in another post. I wanted to share what’s here, though, because I think it’s extremely relevant to the future of learning at work.
That omnipresent quotation from a movie puppet often exasperates me.
Of course there’s try-in fact, it’s the effort involved in genuinely trying that’s essential. Otherwise, no Jedi training and not much need for a master; Yoda could just take a seat behind Statler and Waldorf.
Trying and succeeding leads to conclusions that may or may not be correct-sometimes they’re simplistic, sometimes they’re downright erroneous. Trying and falling short, in an environment where such trying is encouraged, can lead to analysis, to greater awareness of the available steps, inputs, and tools, and to improved performance.
The bigger lesson, I am more and more convinced, is that comprehensive systems training is a myth. People might spend extended time in formal classes, or labor their way through highly structured text or tutorials, but most of the time they’re looking for how to accomplish something that seems valuable to them. Just tell me how to get these images posted. Let me create a series of blog posts that have automatic navigation. How can I search this mass of data to find things that are X, Y, and Z, but not Q?
As I put it in a different context (vendor-managed inventory), I don’t want to know about standard deviation. I want to know whether the grocery warehouse computer’s going to order more mayonnaise-and how to tell it not to, if that’s what I think is best.
In no way am I saying that analysis doesn’t matter. It matters a lot-witness the skillful observation and analysis of user testing that led Carroll and his associates to the Minimal Manual. That for them was a starting point-they examined data from their testing to gain further insight and to guide decisions about supporting learning.
Dave Ferguson | Blog | Aug 19, 2015 05:01pm
(This is a continuation of a previous post based on John M. Carroll’s The Nurnberg Funnel)
The main elements in the Minimal Manual test-a task-centric approach to training people in using computer software-were lean documentation, guided exploration, and realistic exercises. So the first document that learners created was a letter. In earlier, off-the-shelf training, the first task had been typing a description of word processing, "something unlikely to be typed at work except by a document processing training designer."
This sort of meta-exercise is very common, and I think almost always counterproductive. Just as with Amtrak’s training trains that (as I said here) didn’t go over real routes, trivial tasks distract, frustrate, or confuse learners. They don’t take you anyplace you wanted to go.
Not that the practice exercise needs to look exactly like what someone does at his so-called real job; the task simply needs to be believable in terms of the work that someone wants to get done.
Into the pool
After creating the Minimal Manual, Carroll’s team created the Typing Pool test. They hired participants from a temp agency and put them in a simulated office environment, complete with partitions, ringing phones, and office equipment. These people were experienced office workers with little prior computer knowledge. (Remember, this was in the 1980s; computer skills were comparatively rare. And Carroll was testing ways to train people to use computer applications.)
Each group of two or three participants was given either the Minimal Manual (MM) or the systems-style instruction manual (SM). Participants read and followed the training exercises in their manuals and periodically received performance tasks, each related to particular training topics. (You can see the task list by enlarging the image on the right.)
Some topics were beyond the scope of either the MM or the SM; interested participants could use an additional self-instruction manual or any document in the system reference library.
After finishing the required portion of training material, participants took the relevant performance test. They were allowed to use any of the training materials or the reference library. They could even call a simulated help line, staffed by an expert on the system who was familiar with the help-line concept but unaware of the goals of the study.
So what happened? Carroll provides a great deal of detail; I’ll summarize what seem to me to be the most important points.
Minimal learning was faster learning.
In all, the MM participants used 40% less learning time than the SM participants — 10 hours versus 16.4. ("Learning time" refers to time spent with either the MM or SM materials, not including time spent on the performance tasks.) This was true both for the basic tasks (1 through 3 on the list) and the advanced ones.
In addition, the MM group completed 2.7 times as many subtasks as the SM group. One reason was that some SM participants ran out of time and were unable to try some of the advanced tasks. Even for those tasks that both groups completed, the MM group outperformed the SM group by 50%.
We were particularly satisfied with the result that the MM learners continued to outperform their SM counterparts for relatively advanced topics that both groups studied in the common manual. This indicates that MM is not merely quick and dirty for getting started… Rather, we find MM better than SM in every significant sense and with no apparent trade-offs. The Minimal Manual seemed to help participants learn how to learn.
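A quick sanity check on those figures, using only the numbers quoted above (my arithmetic, not Carroll's):

```python
# Checking the learning-time figures quoted above.
mm_hours, sm_hours = 10.0, 16.4
saving = 1 - mm_hours / sm_hours
print(f"MM used {saving:.0%} less learning time")   # 39% -- the "40% less"
```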
A second study, more analytical though more limited in scope, found similar results. In this study, Carroll’s group also compared learning by the book (LBB) with learning while doing (LWD). The LBB group was given training manuals and assigned portions to work with. After a set period of learning, they were given performance tasks. This cycle was repeated three times. The LWD learners received the first task at the start of the experiment; as they completed each task, they received the next one. There was also an SM by-the-book group and an SM learn-while-doing group.
So there are two ways to look at the study: MM versus SM, as with the previous study, and LWD versus LBB for each of those formats. To make that clear: both sets of LWD learners received the training materials and the relevant performance tasks together at the start; both sets of LBB learners had a fixed amount of time to work with the training materials (which included practice) before receiving the performance tasks.
Among the things that happened:
MM learners completed 58% more subtasks than SM learners did.
LWD learners completed 52% more subtasks than LBB learners did.
MM learners were twice as fast to start the system up as SM learners.
MM learners made fewer errors overall, and tended to recover from them faster.
Mistakes were made.
One outcome was the sort of thing that makes management unhappy and training departments uneasy: the average participant made a lot of errors and spent a lot of time dealing with them. Carroll and his colleagues observed 6,885 errors and classified them into 40 categories.
Five error types seemed particularly important-alone they accounted for over 46 percent of the errors; all were at least 50 percent more frequent than the sixth most frequent error…
The first three of these were errors that the MM design specifically targeted. They were important errors: learners spent an average of 36 minutes recovering from the direct consequences of these three errors, or 25 percent of the average total amount of error recovery time [which was 145 minutes, or nearly half the total time].
The MM learners made significantly fewer errors in each of the top three categories-in some cases nearly 50% fewer.
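The proportions hold up if you run the quoted numbers (again, my arithmetic on the figures above):

```python
# Checking the error figures quoted above.
total_errors = 6885
print(round(total_errors * 0.46))        # ~3167 errors in the top five types

top3_recovery, total_recovery = 36, 145  # minutes, average per learner
print(round(top3_recovery / total_recovery * 100))   # 25 (percent)
```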
This to me is an intriguing, tricky finding. A high error rate, combined with persistence and eventual success, can indicate learning, though I wonder whether the participants found this frustrating or simply an unusual way to learn. I’m imagining variables like time between error and resolution, or number of tries before success. Do I as a learner feel like I’m making progress, or do I feel as though I can’t make any headway?
The LWD participants (both those on MM and on SM) had a higher rate for completing tasks and a higher overall comprehension test score than their by-the-book counterparts. So perhaps there’s evidence for the sense of progress.
Was that so hard?
Following the trial, Carroll’s team asked the participants to imagine a 10-week course in office skills. How long would they allow for learning to use the word processing system that they’d been working with? The SM people thought it would need 50% of that time; the MM people, 20%.
Slicing these subjective opinions differently, the LBB (learn-by-book) group estimated less time than the LWD (learn-while-doing) group. In fact, LBB/MM estimated 80 hours while LWD/MM estimated 165.
What this seems to say is that the MM, compared with the SM, helped people feel that word processing would be easier to learn-but also that those who learned while doing expected it to take more time than those who learned by the book.
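For what it's worth, the percentages and the hour figures line up if the imagined course is full-time. The 40-hour week is my assumption; the book supplies only the 10-week course and the hour estimates.

```python
# Relating the time estimates above, assuming (my assumption, not the
# book's) a full-time, 40-hour-per-week course.
course_hours = 10 * 40                  # hypothetical 10-week course
print(80 / course_hours)                # 0.2 -- matches the MM group's 20%
print(165 / course_hours)               # 0.4125 -- LWD/MM's higher estimate
```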
♦ ♦ ♦
The post you’re reading and its predecessor are based on a single chapter in The Nurnberg Funnel-and not the entire chapter. The subsequent work Carroll discusses supports the main design choices:
Present real tasks that learners already understand and are motivated to work on.
Get them started on those tasks quickly.
Encourage them to rely on their own reasoning and improvisation.
Reduce "the instructional verbiage they must passively read."
Facilitate "coordination of attention" — working back and forth between the system and the training materials.
Organize materials to support skipping around.
I can see-in fact, I have seen-groups of people who’d resist this approach to learning. And I don’t only mean stodgy training departments; sometimes the participants in training have a very clear picture of what "training" looks like and what "learning" feels like, and spending half their time making errors doesn’t fit easily into those pictures.
That’s an issue for organizations to address-focusing on what it really means to learn in the context of work. And it’s an issue for those whose responsibilities include supporting that learning. Instructional designers, subject-matter experts, and their clients aren’t always eager to admit that explanation-laden, application-thin sheep-dip is ineffective and even counterproductive.
CC-licensed image: toy train photo by Ryan Ruppe.
Dave Ferguson | Blog | Aug 19, 2015 05:01pm
At the Innovations in e-Learning Symposium this week, Dan Bliton and Charles Gluck from Booz Allen Hamilton presented a session on "failure-triggered training." I was really impressed by their description of a study that explored different approaches to reducing the risk of phishing attacks in a corporate setting. For one thing, as I told Charles immediately after the session, they invented the flip side of a job aid. But I’m getting ahead of myself.
In this post:
Their session description (from the symposium brochure)
My summary of the session, with a few excerpts from their presentation
(I’ll repeat this link a few times in this post; all those links are for the same set of materials. You don’t need to click more than once.)
(At least) three implications for improving performance
The session description
Study Results: Failure-Triggered Training Trumps Traditional Training
We didn’t expect our highly interactive eLearning (that generated great post-test scores) to be completely ineffective in changing behaviors in the work environment! Could the same eLearning be made effective if delivered as failure-triggered training? Come learn the outcomes of a blind study of nearly 500 employees over nine months which analyzed multiple training approaches. The study shows that the same eLearning was significantly more effective when delivered as spaced events that employed learning at the point of realization. This combination of unannounced exercises and failure-triggered training (a See-Feel-Change approach) significantly reduced improper responses to phishing attacks by 36%.
I didn’t ask Bliton or Gluck about this, but "see-feel-change" seems related to what John Kotter talks about here: making a seemingly dry or abstract concept more immediate and concrete.
What I heard: BAH’s study
(Note: this is my own summary. I’m not trying to put words in their mouths, and may have misunderstood part of the session. If so, that’s my fault and not theirs. In no way am I trying to take credit either for the work or for the presentation by Dan Bliton or Charles Gluck.)
The Booz Allen Hamilton (BAH) study, involving 500 employees over 9 months, analyzed different training approaches to "phishing awareness." The training aimed at making employees aware of the risks of phishing attacks at work, with the goal of reducing the number of such attacks that succeed.
The study wanted to see whether interactive awareness training produced better results than static, page-turner training. In addition, the study used failure-triggered training, which Bliton and Gluck explain this way:
Unannounced, blind exercises [simulated phishing attacks] delivered in spaced intervals, combined with immediate, tailored remedial training provided only to the users that "fail" the exercises.
In other words, if you click on one of the fake phishing attempts, you immediately see something like this:
BAH divided the study participants into three groups:
The control group received generic "training" about phishing that did not tell them how to respond to attacks.
The wiki group’s training consisted of a non-interactive page-turner, copied from a wiki.
The interactive group’s training included practice activities (how to identify likely phishing, how to respond).
In post-training comments, the Interactive group gave their training an overall score of 3.8 out of 5. As the presenters noted somewhat ruefully, the Wiki group gave theirs 3.7 - and the control group gave theirs 3.4. (See slide 11 in the presentation materials.) The page-turning Wiki group actually felt better prepared to recognize phishing than the Interactive group.
Posttest questions indicated that 87.8% of the Wiki group and 95.6% of the Interactive group knew whom to notify if they spotted suspicious email.
From the response to the first simulated attack, however, Dan and Charles learned there was no significant difference between the three groups (Control, Wiki, Interactive) — nearly half the participants in each group clicked the link or replied to the email.
What happened next at BAH
Over six months, participants received three "exercises" (mock phishing attempts). "Failure" on these exercises consisted of either clicking an inappropriate link (producing an alert like the example above) or replying to the email — hence, "failure-triggered training."
The study provided good data about actual performance, since it captured information like who clicked a link or replied to the simulated phishing.
Incorrect responses fell dramatically between the first and second exercises, and further still between the second and the third.
Bliton and Gluck attribute this decrease to two main factors: the spaced-learning effect produced by the periodic exercises, and "learning at the point of realization," since what you could think of as failure-feedback occurred just after someone responded inappropriately to what could have been an actual phishing attack.
If you’re familiar with ideas like Gottfredson and Mosher’s Five Moments of Need, which Connie Malamed summarizes nicely, this is #5 ("when something goes wrong").
I’ve left out plenty; if you’ve found this description intriguing, take a look at their presentation materials. I can tell you that although Bliton and Gluck’s presentation at IEL12 had a relatively small audience, that audience really got involved: questions, opinions, side conversations-especially striking at 4 o’clock on the last afternoon of the symposium.
What I thought, what I think
This approach is much more than training, in the sense of a structured event addressing some skill or knowledge need. I told Charles Gluck that it’s almost the flip side of a job aid. A job aid tells you what to do and when to do it (and, of course, reduces the need to memorize that what-to-do, since the knowledge is embedded in the job aid).
At first I thought this approach was telling you what not to do, but that’s not quite right, because you just did what you shouldn’t have. You can think of it as being like a ground fault circuit interrupter (GFCI), a special type of safety device for an electrical circuit.
GFCIs can respond to a problem too small for a circuit breaker to detect. So you’re blow-drying your hair, when click! the wall outlet’s GFCI trips, safely breaking the circuit and interrupting your routine. Not only do you avoid a shock; you also have feedback (if you know about how GFCIs work) that you’d been at risk from electrical hazard.
In the same way, BAH’s mock-phishing exercise interrupts the flow of work. By following the interruption with immediate, relevant, concrete feedback, as well as an offer for further details via a brief training program, this short circuit is turned into a smart circuit.
Which to me opens the door to — let’s use a different term instead of "failure-triggered" — task-triggered performance support. Like a virtual coach, the BAH exercises detect whether I responded inappropriately and then help me not only recognize but even practice what to do instead.
What I’m leaving out
This was a study and had limits. For one thing, because of the failure-trigger, we don’t know much about the people who didn’t click on the phishing attempts: have they really mastered this skill, or did they just not happen to click on these trials?
There’s also some data about the best response (don’t click the link, do report the attempt), though the numbers seem very small to me. (I don’t recall anyone asking about the details on this topic, so I could well be misunderstanding what the numbers represent).
On the corporate-culture side, what happens within the organization? Does this seem Orwellian? Can the organization apply it as formative feedback intended to help me improve, or do I just end up feeling that Somebody’s Watching? I’d like to look for some data about the effects of retail mystery-shopper or secret-shopper programs, a similar activity that can seem either like trickery or like process improvement.
What about habituation? Will the effectiveness of this approach fade over time?
Most intriguing: can you harness this as a form of ongoing training? For example, along with informing people about some new security threat, create and send out further exercises exemplifying such a threat. Their purpose would be to provide a kind of on-the-job batting practice, with "failure" producing two-part feedback ("You missed this security threat, which is…" "To find out more, do this…").
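To make that batting-practice idea concrete, here's a sketch of the loop in Python. Everything in it is hypothetical-the names, the URL, and the feedback text are mine, not BAH's implementation:

```python
# Hypothetical sketch of ongoing failure-triggered training: send a mock
# threat; on "failure," deliver the two-part feedback described above.
# All names and URLs here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Exercise:
    threat_name: str     # e.g., a newly publicized kind of attack
    lesson_url: str      # short follow-up training for those who "fail"

def on_user_action(clicked, exercise):
    """Return two-part feedback on failure; None (no interruption) on success."""
    if not clicked:
        return None
    return (f"You missed this security threat: {exercise.threat_name}. "
            f"To find out more, go to {exercise.lesson_url}")

drill = Exercise("credential-harvesting email",
                 "https://intranet.example/phishing-101")
print(on_user_action(True, drill))   # the failure-triggered feedback message
```

The design choice that matters is in the first branch: people who respond correctly are left alone, so the training lands only at the point of realization.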
Dan Bliton, Charles Gluck, and their colleagues have done more than make BAH more secure from phishing. They’ve also shared a creative, practical experiment.
Dave Ferguson | Blog | Aug 19, 2015 05:01pm