Posted by Christine Schaefer

Health care organizations must have effective approaches to build and maintain a workforce with the skills to meet patients’ needs. Richard Lowe, senior vice president of human resources at Baldrige Award-winning St. David’s HealthCare, outlined his organization’s system for ensuring a talented workforce during the Baldrige Program’s Quest for Excellence® conference last month. The Austin, Texas-based health system employs more than 8,000 people in its six hospitals, six ambulatory surgery centers, and numerous physicians’ practices.

The organization has structured its human resources (HR) function through "centers of expertise" that manage workforce recruitment, compensation and benefits, service excellence, and an institute for learning that supports employee development. In addition, said Lowe, HR support teams based at St. David’s HealthCare facilities focus on the ambulatory surgery and physician service lines.

The organization’s strategic planning process drives the HR strategy, with key metrics such as workforce productivity, turnover, and engagement identified and tracked. HR objectives, which are established as part of the annual strategic planning process, are tightly aligned with overall business objectives, said Lowe. Those objectives are also anchored in the organization’s "ICARE" values of integrity, compassion, accountability, respect, and excellence.

"It’s very important from a systems perspective to have alignment across our system," he said. Therefore, the organization has standardized approaches to workforce policies, compensation and benefits, recruitment, performance management, and learning and organizational development. For example, St. David’s Institute for Learning provides a centralized and standardized means to enhance the ICARE-based culture throughout the organization. Centralized offerings in leadership and organizational development allow for standardization in areas such as the new-employee orientation process, leadership development, and general training for employees, while also providing flexibility to adapt to changing needs, said Lowe. And the Academy for Clinical Excellence—with offerings such as the New Graduate Nurse Immersion Residency and the Specialty Nurse Accelerated Program (SNAP), as well as continuing clinical-education offerings and competency development and coordination—represents a way that St. David’s has grown its own approach to supporting high performance.

According to Lowe, his organization has worked hard with local communities to cultivate opportunities for mutual success. The health system’s outreach commitments include education partnerships and active participation in local, state, and regional associations and coalitions that benefit community health and well-being. For example, St. David’s collaboration with Texas State University resulted in the St. David’s School of Nursing. The program provides the health care organization with a "deeper pipeline" of new workforce members; it simultaneously benefits the university through increased enrollment and the local community through retention of graduates residing in the area. St. David’s has also partnered with the nonprofit Goodwill to create workforce development programs, which it can tap for ancillary support, said Lowe.

With a focus on maintaining its ICARE culture, St. David’s ensures that its prospective workforce is introduced early to the organization’s five core values. In fact, Lowe said that reviewing and accepting those ICARE values takes place before individuals can even apply for jobs, to help ensure each new hire’s "fit" with the organization’s culture. To the same end, "high-functioning employees" are asked to conduct peer interviews on review panels to screen job applicants for fit with the cultural values. "Infusion of talent is critical to our sustained success; it’s just as important that that new talent understand the importance of our values," said Lowe. When CEOs of the various St. David’s facilities review the ICARE values as part of new-employee orientation, he added, "They don’t just talk about the values; they talk about why they matter."

For more information about the workforce-focused approaches of St. David’s HealthCare and other Baldrige Award recipients, see the organizations’ profiles and award application summaries on the Baldrige Program’s website. And please comment below to share how other high-performing organizations you know get and grow employees for the benefit of the organization’s future, its customers and other stakeholders in the community, and individuals’ careers.
Posted by Dawn Marie Bailey

The Baldrige Program recently became an endorser of Manufacturing Day, which is co-produced by several organizations, including the Manufacturing Extension Partnership (MEP), the National Association of Manufacturers, the Manufacturing Institute, and the Science Channel. The goal of the day, which occurs this year on October 2 (although many states and manufacturers plan events for a full week or throughout the year), is to give manufacturers an opportunity to open their doors and show, in a coordinated effort, what they and manufacturing as a whole offer to the economy. They also can work on their challenges: addressing a skilled labor shortage, connecting with future generations, taking charge of the public image of manufacturing, and ensuring the ongoing prosperity of the whole industry. A recent MEP blog outlined the top challenges keeping manufacturing CEOs up at night: continuous improvement, growth, workforce needs, technology needs, supply chain needs, and product innovation/development.

The Baldrige Program is a natural participant in Manufacturing Day. Created by Congress in 1987 and named after Secretary of Commerce Malcolm Baldrige, the program was charged with developing, educating about, and promoting criteria to help manufacturers become more competitive with their global counterparts. Baldrige continues to promote manufacturing through its products and services, including the free, downloadable Baldrige Excellence Builder self-assessment.

If you want to learn more about how to participate, consider attending a free webinar on June 8 from 2 to 3 pm EST on how and why to get involved in Manufacturing Day. In addition, anyone can find a local event in his/her community that is associated with the day. Schools and students at all levels may find particular benefit from Manufacturing Day by participating in real or virtual field trips.
Posted by Harry Hertz, the Baldrige Cheermudgeon

I wasn’t going to write about my recent experience, but further reflection compels me to share some thoughts. I recently fulfilled part of my civic responsibility by serving on a circuit court jury. My first conclusion was that our court system is static. Let me explain. Every time legal counsel approached the judge, the judge flicked a switch to engulf the courtroom in noisy static, so that the conversation at the bench could not be heard by others in the courtroom. By the tenth or fifteenth time this happened, I burst into a smile that must have confused anybody observing my behavior. I had suddenly concluded that our legal system was part justice and fully static.

At the end of our first day of jury deliberation, we had a hung jury on the last count against the defendants. The judge instructed us to continue deliberating but would not let us stay, as we wished to do. Court was recessed for the day, requiring us to return for another day. I could recount some other experiences related to my juror’s perspective of the trial process, but let me move on to my subsequent reflection.

I have always considered education and health care to be somewhat unique industries. Both of these endeavors are characterized by highly trained, highly knowledgeable, and highly independent knowledge workers: teachers and physicians. Furthermore, they are characterized by having "unpaid workers" (students and patients) who are key contributors to the success or failure of the services delivered (education and health care). And yet those same people are also important customers who must be satisfied, and will hopefully be loyal, to the institution.

My revelation after serving on the jury was that our court system is similar to our education and health care systems. The judges have the same characteristics as the teachers and physicians, and the jurors are very similar to the students and patients. Yet I doubt we ever consider these similarities. I am certain there is some cross-sector learning that would be valuable. How do you engage these "customers" in all three sectors so that they are satisfied with the service and eager to engage as workforce members in their own self-interest, as well as the common interest of improving the overall product? How do you solicit their honest and thoughtful input into process improvement and then act on it? How do you get joint ownership for brand image among the key knowledge workers and the unpaid workers? How do you encourage and foster partnering between the independent knowledge workers and the paid administrative professionals, critical to system success?

I would like to be part of the above discussions. Maybe we could encourage use of the Baldrige Excellence Framework and Excellence Builder as an approach for better communication and cooperation. And maybe there are more "industries" that share these characteristics and should be engaged in the dialog. Certainly, all of us as taxpayers could benefit from the outcomes.
Posted by Dawn Marie Bailey

With the Belmont Stakes this weekend and a potential Triple Crown victory for the first time in more than 30 years, I’m writing a short blog today, which is pretty unusual for me. But that’s because I came across another blog that I thought would be a fun and interesting read for the Baldrige community: "What Kentucky Derby Handicapping Can Teach Us About Organizational Metrics." In trying to determine the finishing places of race horses, a "complex exercise in data science," author Nicole Radziwill realized that she was essentially following the Baldrige analysis process "LeTCI" (levels, trends, comparisons, and integration) to determine whether an organization "has constructed a robust, reliable, and relevant assessment program to evaluate their business and their results."

"And what does this mean for organizational metrics?" she writes. "To me, it means that when I’m formulating and evaluating business metrics I should take a perspective that’s much more like handicapping a major horse race—because assessing performance is intricately tied to capabilities, context, the environment, and what’s bound to happen now, in the near future."

Click on the link above to read the article. And here’s some more food for Baldrige thought: Results are important, but they must derive from process. What ADLI (approach, deployment, learning, and integration) indicators might you look for in a prize-winning horse? What other innovative ways have you used Baldrige outside of an organizational assessment?
Posted by Christine Schaefer

The following interview highlights how 2014 Baldrige Executive Fellow Steven J. Kravet, M.D., M.B.A., F.A.C.P., has drawn on cross-sector learning from leaders of Baldrige Award-winning organizations to advance his health care organization’s performance. As head of Johns Hopkins Community Physicians, Kravet has created and launched a systematic communications plan to better connect and engage a diffuse workforce of approximately 1,600 physicians and staff members with the organization’s mission and strategy.

Dr. Steven J. Kravet, President of Johns Hopkins Community Physicians, Baldrige Executive Fellow

First, could you please tell us about Johns Hopkins Community Physicians and your leadership role?

I am president of the organization, which has about 400 physicians and about 1,200 employees altogether and 40 practices across the greater Baltimore, Maryland, and Washington, D.C., region providing primary care, specialty care, and hospital-based care. We are integrated into the Johns Hopkins Health System [of Johns Hopkins Medicine]; all of the physicians and all of the staff members of Johns Hopkins Community Physicians (JHCP) are employed by the health system. (Physicians affiliated with Johns Hopkins in private practice may teach in or have other relationships with the system, but the only physicians employed by it are either the full-time faculty members of the [Johns Hopkins] university or our physicians of JHCP.) I report to the president of the health system. I have a leadership team, including clinical, operational, quality, and finance leaders, to oversee this organization. We’ve been in existence as an organization for over 30 years. I’ve been in this role for six years. I’ve been with Hopkins overall for 23 years.

What was the value of the Baldrige Executive Fellows program for you as a leader?

I had long been a fan of the Baldrige [Excellence] Framework. Having the opportunity to dive in deeply to the background and philosophy, as well as the opportunity to see the framework in action, was priceless. From the onset of the program, I was able to take elements back to my organization, which I am certain has paid back the cost of the program several times over already.

How have you applied learning from the Baldrige Fellows program in your work?

Sure; some specific work that’s been under way is about enhancing the patient experience. We’ve worked on some principles of service excellence and employee engagement learned from the Ritz-Carlton and from K&N Management—two Baldrige Award winners to which I was exposed through the Baldrige Fellows program. We’ve enhanced our communications strategy—that was my capstone project—which is in some ways similar to what the Ritz-Carlton program is all about. And we’ve begun looking differently at staff development. We’ve also begun to talk about how visual displays of data could help us in our performance-improvement activities. Some of those things are mapped out as plans for the next several months. I’m hoping to take advantage of the generosity of the Baldrige Award winners to host members of my executive team to take a deeper dive into some of those elements. My entire executive team attended the Baldrige Quest for Excellence® conference [in April], which was great. It generated lots of conversations about the "art of what is possible"—the sense that Baldrige is something that is inspirational and achievable, with the right focus.
Immersing the team for three or four days was a really valuable part of the experience. Our hope is to dive more deeply over the next several months.

Would you please elaborate on how you’ve enhanced your organization’s communications strategy through your capstone project for the Baldrige Fellows program?

Sure. I set out to create a better-defined and structured communications strategy following two basic principles. One is the notion of a cycle of communications linked to strategic planning. The concept is to have bidirectional communications to help influence our strategic planning process and solicit input more purposefully from the front lines of the organization and also from the board of trustees and other levels of our organization, as we set out in our strategic planning. And we did that over the course of this year. The other cyclical component that is part of the capstone is to have a link to the mission and vision statements of Johns Hopkins Medicine and the strategic priorities. So what we’re doing is having themes that repeat over the course of the year. We have six strategic priorities at Johns Hopkins, and the goal is to have each of those strategic priorities reflected two times throughout the year as a component of our communications strategies. And then, in each of those months when we focus on one of those strategic priorities, we have a multimedia approach toward communicating it. The reason for that has to do with an assessment we did, where we discovered that people across the organization have very different preferences for how they like to receive information. Some people like e-mails or newsletters, some people like staff meetings, and some people like to watch webinars. So what we’re doing is taking the theme of the month and spreading it across all of the media modalities so that everybody will have heard the same message.

The newest form of communications is one I adopted from the Ritz-Carlton, which that organization calls a "daily report" or a "daily line-up." Our "Week in Focus" is a set of talking points that are scripted and balanced; those will be distributed to management throughout the organization, with an expectation that, at some level in the organization, they will be read aloud to a unit of employees. So once a week everybody is going to hear a set of talking points that are based on what members of the executive team think is most important to communicate. The purpose is to make sure everybody is hearing the same message.

Would you please describe how the weekly communications are structured?

The entire report is called The JHCP North Star; my capstone was entitled Galactic Communication Strategy: Bidirectional Communication to Improve Connection to Mission and Perceptions of Opinions Count in Johns Hopkins Community Physicians. The reason for the galactic reference is the notion that, just like our solar system creates a cycle of seasons, it’s valuable to have a cycle to communications. The "north star" reference is to the notion that we all need a guidepost, or a beacon, that helps give us direction—so we are linking everybody to the organization’s mission through this North Star strategy.

The first section is "The Week in Focus" and communicates the most important points of the executive team. The next section has to do with employee engagement. What we’re doing in this section is linking in a structured way one of our eight core values or one of the engagement questions from the Gallup Q12 survey.
There will be a different theme each week. We will share an example and explain that we want them to have a discussion on other examples in their particular unit. So, for instance, there might be a question about having all the right tools to do their jobs well; we’ll give them an example of what that means and ask them to have a discussion on how they’re ensuring in their unit that they have all the tools to do their jobs well.

A sample JHCP North Star report; image used with permission

The next section is about the patient experience; in that section, we go through a series of questions from our patient engagement surveys. We’ll ask how, in their role, they’re ensuring that patients are receiving patient-centered and excellent care. If the question is about access, for instance, we’ll give them an example of what patients might perceive about access, and we’ll ask them to have a conversation about what they are doing in their unit to ensure access. The next week, the theme might be about patient communications skills or about wait times. So each week, we’ll have a different engagement theme for staff and a different patient-experience theme.

The last section relates to strategic priorities. In each weekly communication, we’ll include one bullet with an example of a strategic priority at the Johns Hopkins Medicine level and one bullet with an example of a strategic priority of Johns Hopkins Community Physicians. For example, one strategic priority is about discovery, so we’ll share something going on about discovery at the Johns Hopkins Medicine level and something at the Johns Hopkins Community Physicians level. By the end of the month, there will be four examples for each, and those will be captured in a monthly newsletter. So even if employees miss a weekly report, they all have an opportunity to read the summary.

Is there a common time and forum for the planned weekly discussions based on these communications at units across your organization, or does each group figure out what works for them?

Closer to the latter: some units have weekly staff meetings; if they don’t, there might be a huddle. The expectation is that all employees will spend at least five minutes each week connecting to the [Johns Hopkins Medicine] mission.

What challenges have you faced in developing and implementing this initiative, and how have you overcome them?

Part of the challenge is that people are busy, and when we give them more things to do, they have to believe that this extra communication that we’re asking them to do is going to ultimately help them. It’s like saying to somebody, "I know this medicine tastes bad, but it will make you feel better in the end." There’s a lot of trust in that. We’re trying to get people to respect that this is an important strategy. I think one of the important parts is that we acknowledged that we had an opportunity to improve [employee] engagement. There was clearly a case for change that we wanted to address. In a series of surveys that we did in the beginning of planning this, we asked people how they wanted to receive communications; we also asked them how they wanted to shape the direction of the organization. So we were really trying to address the issue of how their opinions count all along the way.

How is this all progressing, and how will you measure the impact?

My goal was to launch the new tool by the end of the year. Our first four weeks of communicating with this new North Star communication tool will be complete by June.
So we have the draft all done. We’ll refine it month by month. We’ll try to get some feedback from folks about the communication tool: Is it readable enough? Is it relevant? Is it something that we want to serve up for them to be read, or do we want to encourage more engagement through conversation? We’ll measure our success in relation to two specific elements of our workforce engagement survey—one is about employees’ connection to the organization’s mission, and the other is about employees’ feeling that their opinions count. Our 2013 results for those two survey questions had decreased significantly from the previous year. So the annual engagement survey administered by Gallup is one measure.

How do you see the project evolving in the future?

What we’re hoping to do overall with [the Baldrige framework] over the next year is to assign categories [of the Baldrige Criteria for Performance Excellence] to each of the members of the executive team. Our hope is that over the next two or three years, our organization might be prepared to do a state [Baldrige-based program assessment/award] application.
This blog continues from part I.

Posted by Dawn Marie Bailey

How Manufacturers Use the Baldrige Criteria to Focus on the Future (continued)

Baldrige Award recipient Lockheed Martin Missiles and Fire Control (MFC), a $7 billion business, manufactures high-precision systems that protect the country and men and women in uniform; these systems "have to work the first time, every time, because lives depend on it," said Steven Sessions, then director of supplier quality, speaking at the 26th Annual Quest for Excellence Conference. He talked about how MFC used the Baldrige model to help improve and manage its supply chain, an effort that began when a senior MFC staff member became a Baldrige Executive Fellow and benchmarked other Baldrige Award recipients on how they handled supply chain management.

"The global recession and budget pressures have probably never been more intense than they are right now. That, along with increased regulations, has really been a big hit to our businesses, and we’re trying to figure out how to account for that," said Sessions. "But as much as it affects us, it affects our suppliers—and some of them are very small—in a very big way. Because of that, the defense supply chain is a real focus area."

From the Baldrige learning, MFC created a Supply Chain Engagement Model that maps to the Baldrige model, a process called Senior Leadership Engagement, and Characteristics of Supplier Excellence. "The Malcolm Baldrige Award that we got really helped open up . . . doors," Sessions added. "I’m not so sure that we would have had the gains that we’ve made over the last year had we not won the award because that brings with it interest from other companies that want to know how you’re doing business. . . . When you talk about the bottom line . . . for us it doesn’t get much better than this: We outperform the market. We outperform others in our industry. . . . When you get your supply chain working, . . . it helps your costs to come down. Baldrige was a big part of making that happen."

Professional Development and Bringing the Learning Home

The 2015 Malcolm Baldrige National Quality Award Board of Examiners includes several experts from manufacturing who attend the training to hone their skills for their own manufacturing organizations and for personal professional development. Baldrige examiner Eric Smith, a process control engineer for Caterpillar, said he uses the Baldrige framework for continuing education. In a supplier development/quality role, Smith said, Baldrige training provides additional skills as an auditor and highlights practices suppliers should follow to enable them to improve their organizations. "I use Criteria practices to offer advice on improvements that can be made to management processes that in turn should result in improved products delivered to my organization," he said. "The Criteria are aimed at senior leadership practices. This is the area that other standards/methodologies do not cover (such as the ISO 9000 standard). Learning these practices provides me deeper insight into company operations when I perform audits on my suppliers. When I discover opportunities for improvement in an organization, I have been able to suggest changes in leadership practices that would be beneficial."

Larry Kimbrough, supplier quality engineer for International Truck and Engine, said Baldrige training has taught him how to look at a process subjectively as it relates to meeting the Criteria.
He added, "my organization does not hesitate to ask my advice when it comes to processes and quality issues. By use of the [Criteria] categories (voice of the customer, leadership, results, etc.) and evaluation, I am able to better assist my company when they come to me with process or quality issues." Robert Tabler, director of Operational Excellence, Global Equipment, Sandvik Mining, just completed his first year of training as a Baldrige examiner. He said his expectation is that training in the Criteria and his work as a Baldrige examiner "will be used to improve customer focus within my area of responsibility. I hope successes can then be expanded into other areas through sharing and communication." What’s the Competitive Advantage for Manufacturers? Can Baldrige Actually Save Them Time? "Baldrige does separate you from your competition in the eyes of the customer," said Du Fresne, citing client assessments that rated the company above the competition in seven of eight metrics and 40% of the market share with customers with whom it does business. Asked his opinion of why more manufacturers are not using the Baldrige Criteria to support their operations, Garvey said,  "A typical manufacturer always gives the excuse I don’t have time for this. I’ve got too many pressing issues. I have customers calling me all the time. I have employees calling out sick. I have equipment that may or may not be running properly. I’ve got creditors that I’ve got to take care of. . . . My response is you don’t have time not to do this. . . . You have to make time to do this. Because once you take the time to investigate and implement these Criteria, then the rest of your day becomes much freer. . . . Once you invest the time, then the return is orders of magnitude."
Posted by Dawn Marie Bailey

No one can deny that there are plenty of quality tools out there to improve performance—of a team, of a process, of a product. But to integrate those tools and know where to apply them for the good of the whole organization, so that learning can be applied and the system can most effectively use resources—that’s where the Baldrige Excellence Framework comes in. Whether you describe it as a blueprint or a map, it is the framework that should guide how and where you apply quality tools. To borrow two quotes from recent interviews with quality experts:

"The Baldrige framework is like the blueprint of a building, with ISO used for specific systems within the building such as electrical and air conditioning systems." (Ron Schulingkamp)

"Baldrige is the overall organizing framework that can identify where there are problems. . . . Think of Baldrige like a map that will show the organization where . . . Six Sigma, Lean, and other tools should be deployed. . . . If an organization deploys [such tools] without an overall map as Baldrige, it would be like taking a trip in a car but not having a map to know the way." (Gene O’Dell)

And here’s another expert, writing in Quality magazine, on the Baldrige Criteria’s complementary nature with business process management (BPM) objectives. In "Aligning BPM with the Seven Categories of the Malcolm Baldrige Award," Forrest W. Breyfogle III, the founder and CEO of Smarter Solutions Inc., writes, "Most organizations use the Baldrige categories to build up a total performance map in order to rule out areas that require improvement. Along with this, organizations may also rely on tools, such as BPM, to devise operations and enhance organization processes." He describes BPM as a way to take control of processes; aligning BPM with the "high-performing business processes" gained from using the Criteria can lead to "economic viability, efficient operations, conservation of natural resources, and social responsibility."

"Therefore, it can be said that the success of BPM, along with other business process tools, can be improved . . . through the Baldrige Criteria," he writes. "The effective alignment could be a source of increasing improvement to even more advanced developments. Breakthrough progress gives organizations the highest competitive edge in all circumstances. . . . By relying on the Criteria, businesses are steps closer to attaining higher levels of productivity and profitability, better employee relations, improved market share, and customer loyalty."

Breyfogle adds, "By taking the seven Baldrige categories into consideration, well-developed and balanced results can be expected. Any misalignment could mean that there is something wrong with the business processes or other areas in the organization. . . . The Baldrige Criteria, therefore, serve as strong criteria to conduct self-assessments and benchmark an organization’s processes and methods with those companies rewarded by the Baldrige Award."
This blog is the second in a two-part series. See the first part.

Day 6: Take a strategic and systematic view of process improvement. Unfortunately, according to the 12 Days of PEX-MAS survey by the Process Excellence Network, a division of the International Quality and Productivity Center, process improvement can often be "pigeon-holed into delivering cost savings or efficiency gains rather than as an enabler of corporate strategy," with a perception that improvement can mean eliminating jobs. To combat this perception, survey respondents suggest: "take a wider, more strategic and systematic view of how [employees’] work fits into corporate strategy. . . . Instead of coming from the perspective of ‘we want to do process excellence’ and then trying to link it to strategy, we need to look at the strategy targets and goals first. Then use whatever tools and techniques to best achieve these goals." The Baldrige Criteria are in agreement with this, prescribing no single tool or methodology to achieve success; rather, the Criteria serve as an overarching framework for improvement across an organization.

Days 7/8: Define process excellence by what holds meaning for your organization. Survey respondents list several names that they use for "process excellence," including "operational excellence" and "continuous improvement." In the case of the Baldrige Criteria, many organizations, such as the Tata Group and Turner Broadcasting Systems, use the Criteria internally but rename elements and adapt language to match their own cultures. There even have been reported cases of "stealth" Baldrige, where organizations are using the Baldrige Criteria but calling their use something else to avoid any preconceived notions or anxiety about an improvement program.

Day 9: Prioritize process within your organization. According to the survey, "There is a risk . . . that if process improvement only is associated with solving a specific problem at a specific point in time, that it becomes something that burns brightly initially but quickly burns itself out. . . . If you are able to show results, people want to know how it was achieved and they become interested." The Criteria have a strong focus on learning and feeding that learning back into improving processes. How you innovate is also important across all areas of an organization’s operations. Item 6.1, Work Processes, goes into some detail on process performance, process improvement, and innovation management. (See the free 2015-2016 Criteria Category and Item Commentary.)

Day 10: Involve all employees, including senior leaders, in process improvement initiatives. "If it’s always the process improvement experts who are leading process improvement, then it’s not building culture," according to the survey. The best models have every level of the organization involved in process improvement. This rings true for most high-performing organizations and all Baldrige Award recipients. For example, the senior leaders of 2014 Baldrige Award recipient PricewaterhouseCoopers Public Sector Practice (PwC PSP) monitor key metrics to control overall costs and work with their operations leaders and the practice’s Quality Management Group to make decisions. Team members continuously assess quality to prevent defects, service errors, and rework before dealing with clients. Such initiatives involve all levels of the organization.

Days 11/12: Use and invest in technology to improve processes. There’s no question that technology has the potential to improve processes.
According to the survey, "the technology that has emerged as a frontrunner for investment is big data and analytics technology," with over 33.8 percent of respondents indicating that they plan to invest in data analytics and big data technologies. In alignment with this, "big data" is called out as an emerging theme in the 2015-2016 Baldrige Criteria, too. "For all organizations, turning data into knowledge and knowledge into useful insights is the real challenge," according to the survey. Dr. Harry Hertz, Baldrige Program director emeritus, discusses the real challenge of big data by focusing on how organizations and governments will manage big data and how  they will properly and appropriately use them. Do you agree with these process excellence challenges and insights for your organization?
Posted by Harry Hertz, the Baldrige Cheermudgeon

One of my regular Blogrige readers and harshest critics, my wife, complained that my recent posts have been too pedagogical and lacked my storytelling instincts. So this post is for her.

Have you seen the recent commercial about buying a used car? It compares the experience to a dinner out and asks whether you ever worry about having to haggle over price at a restaurant. It then encourages you to shop at a specific used car dealer where you don’t have to bargain about price. Well, my wife and I recently bought a new mattress. So now you are asking what that has to do with eating out or buying a used car. The answer is simple, my order of preference: eat out, buy a used car, buy a mattress.

When you buy a used car, you can compare prices among dealers and even look up average prices for your make and model on the web. You also can look up the blue book value. You can walk into the negotiation as an educated consumer when the salesperson tells you that you are taking food out of the mouths of his or her young children with the price you want to pay. How do they ever stay in business?

Fortunately, I buy a mattress even less frequently than I buy cars (run them to the end of their life is my philosophy). Mattresses are not like cars. Every store is always having a half-off sale as the entry point. That tells you something about the list price for starters. Then you are expected to bargain down from the half-off price. Comparison shopping—forget it. Every dealer has different names for the various mattresses from each major manufacturer. The salesperson we eventually bought our mattress from even showed us her commission on the mattress for the price we negotiated. Her kids were going hungry on that commission, but she needed the volume. (I hope she isn’t married to a car salesperson, or I could be partially responsible for a whole family dying of starvation.) And the deal was so good that she needed her district manager’s approval, which he reluctantly gave, according to her report back.

So, I should have felt either great or guilty leaving the store. But I felt neither. I felt like I had to go home and shower to return to normal. Who wins in these negotiations? Maybe the dealer (car or mattress) feels this is necessary to earn a decent return. I never feel good after the negotiation. Why does this practice pervade a few retail industries and not exist in others?

Wouldn’t it be wonderful if these retailers used the Baldrige Excellence Framework? How would they answer questions in the Customers category? A few I would like to see answered are: How do you listen to potential customers to obtain actionable information? How do you build customer relationships? How do you manage customer relationships to enhance your brand image, retain customers, and exceed their expectations? How do you determine customer requirements for product offerings and services?

Did I get a fair deal on a good mattress? I wish I knew. Or better yet, I wish I didn’t have to think about it because I knew that I got a fair quality/price ratio. All I want is a fair transaction for the dealer and for me. Is that asking too much? How about you?

And for those of you who are curious, I let my wife preview this post and she approved!
By Christine Schaefer

At the end of this month, the Baldrige Performance Excellence Program will lose a long-time staff member, Sandra Byrne, to retirement. The program’s education specialist by title, Sandra has long acted as an enthusiastic liaison to leaders of organizations in the education sector. She also has leveraged her sector expertise for biennial revisions to the Education Criteria for Performance Excellence and other publications. She has surmounted challenges in wide-ranging team and project assignments over the years, including stints on the program’s past management, training, outreach, and award process teams. As a member of the Baldrige Program’s Education Team in recent years, Sandra has led many processes associated with Baldrige examiner training and the Baldrige Award process, among other contributions. The following interview was designed to elicit some statements from Sandra that would, in effect, spotlight the spirit of her work in advancing the Baldrige Program’s mission to help organizations in every sector of the economy improve their performance.

Sandra Byrne takes a break during Baldrige examiner training.

How do you feel about retiring as you reflect on your long tenure on the staff of the Baldrige Program?

It’s challenging to convey the depth of my feelings associated with leaving the Baldrige Program, to which I feel such attachment and commitment. It’s been an extraordinary 15+ years. And I’m leaving with greatly mixed feelings—excitement and apprehension, anticipation and cautiousness, happiness and sadness. There is so much to look forward to, yet so much to miss. I arrived at the Baldrige National Quality Program (as it was then known) in 1999 with some knowledge of the power of the Baldrige Criteria for Performance Excellence. In the early 1990s, when I was working at the National Alliance of Business, I had researched school districts that were using the Criteria in their improvement efforts. So when I was offered an opportunity to join the Baldrige staff, I was thrilled! I knew I would be exposed to new knowledge and new people, but little did I anticipate how much I would learn—how many bright and wonderful people I would meet, get to know, and call "friend"; and how much I would grow, personally and professionally.

Would you please share some of your career highs with the Baldrige Program?

My greatest moments with the program are associated with seeing Baldrige in action, so to speak. For example, I’ve always been impressed—and deeply moved—by presentations made by the Baldrige Award recipients at the annual Quest for Excellence® conference and other venues. When I’ve heard (Baldrige Award recipient) leaders such as Rulon Stacy and JoAnn Sternke talk about Baldrige "saving lives" (in health care and in education, respectively), I am inspired and hopeful and proud. As another example, when I’ve been the program’s monitor on site visits (as part of the Baldrige Award process) and observed the interactions of the [examiner] team members with one another and with employees of the applicant organization, I’ve found it fulfilling to be in the presence of these remarkably dedicated people who "get it" and are making such huge, positive differences in the lives of so many people. I also recall fondly attending and presenting on Baldrige at many conferences around the country, where it’s always a great thrill to see the "Baldrige light" go on in somebody’s eyes.
I especially appreciated attending a gathering in Missouri in recent years where the governor kicked off a two-day meeting by asking all of his state’s school districts to commit to implementing Baldrige processes. Knowing that an entire state’s worth of school districts was going to start or continue on a Baldrige improvement journey was beyond gratifying for me, especially given that I started my career as an elementary school teacher (lo, those many years ago!).

Given your humility, I know you’re loath to "toot your own horn" about your individual achievements; however, would you please share at least one team or group achievement during your time with the Baldrige Program in which you take great pride?

Alright; that seems fair. Within the walls of the Baldrige Program, I’m proud to have been part of the team that first addressed the challenge of our funding situation back in 2012, the transition year in which the program was without a federal appropriation for the first time. We worked very hard for several weeks to come up with solutions to cut expenses while increasing revenues. That work is still in progress, of course, but our team did yeoman’s work to meet the initial challenge, and I’m very proud to have been a member.

Now I’m going to lead a question with an observation that may embarrass you but that many of your coworkers consider central to your legacy: You have long been admired for your exceptional ability to connect people. You’ve always "warmed the room" as a facilitator for Baldrige examiner training and as a conference presenter, creating fertile ground for everyone’s learning. And you’re revered on our staff for voluntarily mentoring and otherwise supporting new and younger employees, as well as student interns. (Of course, I’m personally indebted to you for serving as a professional mentor for the past decade.) Through such voluntary roles, you have been credited by colleagues with strengthening teams of staff members and volunteers alike through a culture that fosters collaboration and high engagement. Could you please comment on that?

Christine, you ARE embarrassing me! I will comment on that, but before I do, I must tell you how flattered I am by what you’ve said and let you know how gratified I’ve been when I’ve had those kinds of opportunities. I really enjoy meeting and getting to know people and then connecting them with others, especially when I’m sure there’s a reason for them to know one another. The "Baldrige Family," a term we use often, is composed of extraordinary people committed to bettering organizations to benefit others. Our examiners, who donate so much effort and time, should all receive a "good citizen" award. I’m privileged to have had opportunities to encourage their learning and development as examiners, all the while knowing that I’m learning more from them than they report they’re learning from me. The organizations that I’ve visited—all of which are committed to ongoing improvement—it’s such an honor to see them "doing Baldrige." And when they tell me I’ve contributed to their having a positive experience, I’m that much more honored. Our boards and panels, with whom I’ve worked over the years, are committed, helpful, and willing to do whatever needs to be done. And my colleagues in the Baldrige Program are so conscientious, hard-working, and just plain smart: I’m proud to be a member of their cohort. The Baldrige Family does important, meaningful work that makes positive changes for individuals, organizations, and the country.
I feel proud—and lucky—to have been part of it for all this time.

Have you made plans for any Baldrige-related endeavors in the next few years?

"What will you do?" That’s a question I’ve been getting a lot over the past several months—since the time I announced that I would be leaving the Baldrige Program. It’s a natural question—certainly the first thing I ask others when I hear about their impending retirement. The answer, to be honest, is that I’m not exactly sure what volunteer or other work I might do in retirement. I am open to serving in some capacity to assist efforts to keep growing the number of organizations (in education especially, as well as every sector) that benefit from the Baldrige framework for improvement. I see many related opportunities at this point, but I think I’m going to be quiet for a little while before sorting through them. Those who know me well probably suppose that my sitting quietly won’t last very long, but I’m looking forward to it while it lasts.

Any parting thoughts you wish to share here?

From the bottom of my heart, I want to thank everyone who has made my time with the Baldrige Program as significant as it has been. Your generosity of knowledge and spirit has helped me to learn and to grow and has touched me deeply. I will remember you with fondness and respect.
Posted by Dawn Marie Bailey

Did you ever wonder who the folks are who judge applications for the Malcolm Baldrige National Quality Award? What in their background brought them to this high honor, and what advice might they have for Baldrige Award applicants, potential applicants, and examiners? In an ongoing series in this Baldrige Program blog, we will be interviewing members of the Judges’ Panel of the Malcolm Baldrige National Quality Award to share individual members’ insights and perspectives on the award process, their experiences, and the Baldrige framework and approach to organizational improvement in general.

The primary role of the Judges’ Panel is to ensure the integrity of the Baldrige Award selection process. Based on a review of the results of examiners’ scoring of written applications (the Independent and Consensus Review processes), judges vote on which applicants merit Site Visit Review (the third and final examination stage) to verify and clarify their excellent performance in all seven categories of the Baldrige Criteria for Performance Excellence. The judges also review reports from site visits to recommend to the U.S. Secretary of Commerce which organizations to name as U.S. role models—Baldrige Award recipients. No judge participates in any discussion of an organization for which he/she has a real or perceived conflict of interest. Judges serve for a period of three years.

Michael L. Dockery, a second-year judge; Senior Manager, Memphis World Hub, FedEx Express Corporation

What experiences led you to the role of Baldrige judge?

I began volunteering on the national Board of Examiners in 2008. My experiences included serving as category lead on several examiner teams; serving as senior examiner and/or team lead for four years; and participating on three site visit teams, including as site visit team lead in 2012. During my time as team lead, I received valuable training, coaching, and mentoring from fellow examiners and Baldrige staff throughout the process. The site visit experiences afforded me an opportunity to demonstrate the analytical, team-building, and leadership skills required to meet the process deliverables for all stakeholders. The developmental programs and knowledge sharing within the Baldrige community also gave me the confidence to take on the responsibility of Baldrige judge once the vetting process was completed.

You have a great deal of experience in the service sector. How do you see the Baldrige Excellence Framework as valuable to organizations in that sector?

With an ongoing focus on service quality and marketplace competitiveness, I picture the Baldrige Excellence Framework being beneficial to organizations in the service sector and in all sectors that are interested in focusing on the future while meeting current customer demands. I feel that it is important that service organizations invest in a framework that will allow them to respond or adapt rapidly to changing demands, as well as to other challenges in the global market. As the Baldrige framework is refined or updated, it continues to create value that can transform organizations by offering criteria that help with marketplace competitiveness, an approach to performance excellence, cultural change, and innovation. In addition, the emphasis on a systematic, disciplined approach to process improvement may give organizations confidence to engage in intelligent risk-taking, enhance the strategic planning process, and identify measures for detecting potential blind spots.
I am confident that the evolution of the Baldrige Excellence Framework, once fully deployed in the service sector, will be able to connect people, processes, and technology seamlessly for long-term value.

How do you apply Baldrige principles/concepts to your own work experiences/employer?

I regularly deploy all areas of the Criteria to process improvement initiatives and daily operations. When assessing process improvement opportunities, I take a holistic viewpoint and systems approach to evaluating how new processes may be implemented and the impact these processes could potentially have on the customer experience. The framework helps me to effectively manage multiple projects or tasks while addressing new, changing requirements by internal and external customers. I have deployed the framework to all organizations I have led since being introduced to the Criteria, and I share key concepts with employees during staff meetings and management development sessions. As a result, there are systematic, repeated approaches that translate into a culture of innovation, safety, and service excellence, which are core values for the company. By integrating the framework with the company’s quality programs and Lean processes, the leadership team and employees are able to implement or sustain core processes that deliver positive outcomes. The framework also helps me create an environment of organizational learning and knowledge transfer by involving all levels of the organization in the improvement process. The process dimensions of approach, deployment, learning, and integration are evident in the organizations I lead, and many of the Baldrige core values, such as visionary leadership, valuing people, and management by fact, are also commonplace. The framework has been instrumental in the company’s ability to refine processes, lead organizations through difficult challenges, and improve business results.

As a judge, what are your hopes for the judging process? Or, in other words, as a judge, what would you like to tell applicants and potential applicants about the rigor of the process?

I have a genuine appreciation for previous members who served on the Panel of Judges and the time commitment that is required to participate in the judging process. Obviously, the focus is on providing outstanding customer service, remaining committed to the process, and identifying role-model organizations that have demonstrated best practices and proven processes that can be benchmarked across the industry. Applicants (former, current, and future) can feel confident that all panel members demonstrate a passion, commitment, and desire to protect the integrity of the process and provide all applicants the highest level of service required to deliver the expected value. The Baldrige staff and other stakeholders work extremely hard to ensure that a consistent process is executed to determine which organizations receive consideration for site visits. The same approach, rigor, and ethical standards are utilized by the Judges’ Panel to identify organizations with role-model best practices and to determine sector award winners. All members appointed to the Panel of Judges provide expert, diverse sector knowledge that is beneficial when evaluating the many processes that various organizations deploy that lead to positive performance results and outcomes.

What encouragement/advice would you give examiners who are reviewing applications now?
I would like to encourage all examiners to do their best, trust the process, and have fun. The experience provides a great opportunity for skill development, teamwork, mentorship, and systematic execution of core processes. I am excited to see the ongoing collaboration and willingness of experienced examiners to onboard new talent into the Baldrige community.

See other blogs from the 2015 Judges’ Panel: Dr. Ken Davis, Laura Huston, Miriam N. Kmetzo, Dr. Sharon L. Muret-Wagstaff, Dr. Mike R. Sather, Ken Schiller, Dr. Sunil K. Sinha, Dr. John C. Timmerman, Roger M. Triplett, and Fonda L. Vera. Greg Gibson, a candidate for the 2015 panel pending appointment, will also be interviewed for this series.
By Christine Schaefer

Did you ever wonder who the folks are who judge applications for the Malcolm Baldrige National Quality Award? What in their background brought them to this high honor, and what advice might they have for Baldrige Award applicants, potential applicants, and examiners? For an ongoing series of blogs on this site, we are interviewing all members of the Judges’ Panel of the Malcolm Baldrige National Quality Award to share their individual insights and perspectives on the award process, their experiences, and the Baldrige framework and approach to organizational improvement in general.

The primary role of the Judges’ Panel is to ensure the integrity of the Baldrige Award selection process. Based on a review of the results of examiners’ scoring of written applications (the Independent and Consensus Review processes), judges vote on which applicants merit Site Visit Review (the third and final examination stage) to verify and clarify their excellent performance in all seven categories of the Baldrige Criteria for Performance Excellence. The judges also review reports from site visits to recommend to the U.S. Secretary of Commerce which organizations to name as U.S. role models—Baldrige Award recipients. No judge participates in any discussion of an organization for which he/she has a real or perceived conflict of interest. Judges serve for a period of three years.

Ken Schiller, a second-year judge; Co-Owner, K&N Management (PDF), a small business that received the Baldrige Award in 2010

What experiences led you to the role of Baldrige judge?

Being a small business [Baldrige Award] recipient in 2010 and then doing my best to be an ambassador for the Baldrige Program.

You have a great deal of experience in the business sector, particularly in the service industry. How do you see the Baldrige Excellence Framework as valuable to organizations in that sector/industry?

The Baldrige framework is the most valuable performance excellence model for any organization in any industry. Our customers benefit from consistency in our products and services. We measure what is important to our customers and the company, identify trends, and use measures to continuously improve. This delights our customers and creates loyalty that allows us to outperform our competitors. Our team members are proud to work for an award-winning organization that focuses on excellence, quality, integrity, and relationships.

How do you apply Baldrige principles/concepts to your current work experience, particularly in the organization you lead?

K&N Management uses the Baldrige framework to align the actions of the company toward one common goal: to delight each guest who walks into our restaurant. Strategic planning continues to fuel our improvement efforts year after year.

As a judge, what are your hopes for the judging process? In other words, as a judge, what would you like to tell applicants and potential Baldrige Award applicants about the rigor of the process?

It is not rocket science, but it is very rigorous and requires a high level of discipline. You can’t just dabble in it; you have to go "all in." A burning desire for continuous improvement is the key driver for success.

What encouragement/advice would you give Baldrige examiners who are reviewing award applications now?

What you are doing matters and will benefit our country by improving American organizations, as well as stretching you to grow personally and professionally.

See other blogs on the 2015 Judges’ Panel: Laura Huston (chair), Dr. Ken Davis, Michael Dockery, Miriam N. Kmetzo, Dr. Sharon L. Muret-Wagstaff, Dr. Mike R. Sather, Dr. Sunil K. Sinha, Dr. John C. Timmerman, Roger M. Triplett, and Fonda L. Vera. Greg Gibson, a candidate for the 2015 panel pending appointment, will also be interviewed for this series.
Blogrige . Blog . Jul 27, 2015 03:08pm
Posted by Dawn Marie Bailey What do the American West, Six Sigma, cowboy ethics, commerce, and quality improvement have to do with one another? They are all elements of the story of Malcolm Baldrige, U.S. Secretary of Commerce under President Ronald Reagan; Baldrige was passionate about the American West and U.S. business. That passion continues to be honored today not only by a national program in his name but by state and sector Baldrige-based programs that spread the Baldrige process to local communities. But it's Quality New Mexico, which uses as its slogan "the state of quality," that has a very personal connection to the Baldrige family. A recent article by Nigel Hey, "The Story of Mac Baldrige and Quality New Mexico," outlines how Mac Baldrige and his commitment to increasing U.S. business productivity and customer satisfaction led to the birth of the Baldrige Performance Excellence Program to support the competitiveness and sustainability of U.S. organizations, and how that national program took root in New Mexico. The story begins with Motorola's 1981 initiative for a tenfold improvement in quality, which included the development of Six Sigma and the manufacturer's implementation of the Baldrige Criteria for Performance Excellence. "To achieve the quality goal demanded by Six Sigma, Motorola required that suppliers start their own Baldrige-based quality programs," writes Hey. "One such supplier was AT&T, which created an internal Chairman's Quality Award based strictly on the Baldrige Criteria and required each division to submit a Baldrige application covering its internal quality program." One of AT&T's suppliers was Sandia National Laboratories, based in Albuquerque, NM. In 1991, at the invitation of U.S. Senator Jeff Bingaman of New Mexico, Motorola's COO Chris Galvin spoke to business leaders in Las Cruces, NM, explaining that quality was "his company's main weapon of defense against the onslaught of new foreign competitors." After site visits to Motorola headquarters and to a Baldrige-based program in Minnesota that was helping organizations improve, New Mexican business leaders and Senator Bingaman became convinced that a Baldrige-based program in New Mexico could help state organizations stem the tide of business lost to foreign competition and increase job opportunities. In 1993, with the support of Sandia National Laboratories, Quality New Mexico was born, with the vision of turning New Mexico into a quality state. Read the full story of how Baldrige expanded to New Mexico and the value it brought across the United States.
Blogrige . Blog . Jul 27, 2015 03:08pm
By Rick Kazman, Visiting Scientist, Research, Technology, and System Solutions Program The SEI has long advocated software architecture documentation as a software engineering best practice. This type of documentation is not particularly revolutionary or different from standard practices in other engineering disciplines. For example, who would build a skyscraper without having an architect draw up plans first? The specific value of software architecture documentation, however, has never been established empirically. This blog post describes a research project we are conducting to measure and understand the value of software architecture documentation on complex software-reliant systems. Our research is creating architectural documentation for a major subsystem of Apache Hadoop, the Hadoop Distributed File System (HDFS). Hadoop is a software framework used by Amazon, Adobe, Yahoo!, Google, Hulu, Twitter, Facebook, and many other large e-commerce corporations. It supports data-intensive (e.g., petabytes of data) distributed applications with thousands of nodes. HDFS is a key piece of infrastructure that supports Hadoop by providing a distributed, high-performance, high-reliability file system. Although there are two other major components in Hadoop—MapReduce and Hadoop Common—we are initially focusing our efforts on HDFS since it is a manageable size and we have access to two of its lead architects. The HDFS software has virtually no architectural documentation, that is, documentation expressing the strategies and structures for predictably achieving system-wide quality attributes, such as modifiability, performance, availability, and portability. This project has thus become our "living laboratory" where we can change one variable (the existence of architectural documentation) and examine the effects of this change. We have enumerated a number of research hypotheses to test, including:
- Product quality will improve because the fundamental design rules will be made explicit.
- More users and developers will become contributors and committers to HDFS because the documentation will enable them to more easily learn the framework and thus make useful contributions.
- Process effectiveness will improve because more developers will be able to understand the system and work independently.
We will measure a number of project metrics before and after the introduction of the documentation, where the "before" state becomes the control for our experiment. We believe the insights gained from this project will be valuable and generalizable because Hadoop exemplifies the types of systems in broad use within the commercial and defense domains. For example, Facebook depends on Hadoop to manage the huge amount of data shared among its users. Likewise, the DoD and Intelligence Community use Hadoop to leverage large-scale "core farms" for various "processing, exploitation, and dissemination" (PED) missions. Whether or not the existence of architectural documentation yields benefits, we can better influence acquisition policies and development practices for related software-reliant systems. I, along with my research team—Len Bass, Ipek Ozkaya, Bill Nichols, Bob Stoddard, and Peppo Valetto—have been assisting two of the HDFS architects in reconstructing, documenting, and distributing architectural documentation for the system. To do this, we initially employed reverse-engineering tools, including SonarJ and Lattix, to recover the architecture. This reverse engineering was only partially successful due to limitations with these tools.
These tools are designed to help document the modular structure of a system, which crucially influences modifiability. In HDFS, however, performance and availability are the primary concerns, and the tools offer no insight into the structures needed to achieve those attributes. We have therefore undertaken considerable manual architectural reconstruction by interviewing the architects and carefully reading the code. After we finish developing and distributing the Hadoop HDFS documentation, we will measure the quality of the code base and the nature of the project, including:
- number of defects
- defect resolution time
- number of new features
- number of product downloads
- size (lines of code, number of code modules)
- number of contributors and committers
These measurements will provide a time series of snapshots of these measures as a baseline. We will continue to track these measurements after the introduction of the (shared, publicly available, widely disseminated) architecture documentation to determine how the metrics change over time. We will also conduct qualitative analysis (via questionnaires) to understand how the documentation is being embraced and employed by architects and developers. We will examine the impact of the documentation on the developers' interactions, specifically how it affects their social network as represented by their email contributions to project mailing lists and comments made in the issue tracking system (Jira). Finally, we will interview key HDFS developers—both contributors and committers—after the introduction of the architecture documentation to gather insights on their perspective about the usability and understandability of the HDFS code base. This project is a longitudinal study, which involves repeated observations of the same items over a period of time. It will take time for the architectural documentation to become known and used, so changes in the metrics we are collecting may not manifest themselves right away. Likewise, after the documentation is distributed, it may take a while for it to be assimilated into the Hadoop developer culture, after which point we will be able to measure whether it has made an impact. Within a year, however, we expect to report on the metrics we gathered, as well as qualitative results from surveys and interviews of HDFS developers. Based on this information we will produce a paper describing our methodology and results from creating the documentation. Many of the systems that rely on Hadoop are highly complex, with millions of users and emergent behavior. Such systems have previously been characterized as ULS (ultra-large-scale) systems. We hope our experiment in understanding the consequences of architectural documentation will advance the SEI's research agenda into ULS systems. We look forward to hearing about your experiences applying architectural documentation to software-reliant systems.
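To make the measurement plan concrete, here is a minimal sketch of how such a monthly baseline could be computed, assuming the issue tracker's history has been exported to CSV. The file name and column names are our own illustration, not the project's actual Jira schema.

# Sketch: monthly defect baseline from a hypothetical issue-tracker export.
import csv
from collections import defaultdict
from datetime import datetime

def monthly_defect_baseline(csv_path):
    opened = defaultdict(int)            # defects opened per month
    resolution_days = defaultdict(list)  # resolution times per month
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["type"] != "Bug":     # hypothetical column name
                continue
            created = datetime.strptime(row["created"], "%Y-%m-%d")
            month = created.strftime("%Y-%m")
            opened[month] += 1
            if row["resolved"]:          # empty string if still open
                resolved = datetime.strptime(row["resolved"], "%Y-%m-%d")
                resolution_days[month].append((resolved - created).days)
    # One snapshot per month: defect count plus mean resolution time.
    baseline = {}
    for month in sorted(opened):
        times = resolution_days[month]
        baseline[month] = {
            "defects_opened": opened[month],
            "mean_resolution_days": sum(times) / len(times) if times else None,
        }
    return baseline

Re-running the same computation on data gathered after the documentation is released would yield the post-treatment time series to compare against this baseline.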
Additional Resources:
For more information about the SEI's architecture documentation methods, please visit www.sei.cmu.edu/architecture/start/documentation.cfm
For more information about the SEI's work in Ultra Large Scale Systems, please visit www.sei.cmu.edu/uls/index.cfm
Download the SEI technical report, Creating and Using Software Architecture Documentation Using Web-Based Tool Support: www.sei.cmu.edu/library/abstracts/reports/04tn037.cfm
Download the SEI technical report, Architecture Reconstruction Guidelines, Third Edition: www.sei.cmu.edu/library/abstracts/reports/02tr034.cfm
Download the SEI technical report, Architecture Reconstruction Case Study: www.sei.cmu.edu/library/abstracts/reports/03tn008.cfm
Download our research study report, Ultra-Large-Scale Systems: The Software Challenge of the Future: www.sei.cmu.edu/library/abstracts/books/0978695607.cfm
SEI . Blog . Jul 27, 2015 03:08pm
By Nanette Brown, Senior Member of the Technical Staff, Research, Technology, and System Solutions Program Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post is from the SATURN Network blog by Nanette Brown, a visiting scientist in the SEI's Research, Technology, and System Solutions program. This post, Categories of Waste in Lean Principles and Architecture, takes an in-depth look at three of the eight categories of waste (defects, overproduction, and extra complexity) from the perspective of software development in general and software architecture in particular. Read more…
SEI . Blog . Jul 27, 2015 03:07pm
By Bill Novak, Senior Member of the Technical Staff, SEI Acquisition Support Program, Air Force Team Background: Over the past decade, the U.S. Air Force has asked the SEI's Acquisition Support Program (ASP) to conduct a number of Independent Technical Assessments (ITAs) on acquisition programs related to the development of IT systems; communications, command and control; avionics; and electronic warfare systems. This blog post is the first in a series on common themes across acquisition programs that we identified as a result of our ITA work. This post explores the first theme: misaligned incentives, which occur when different individuals, groups, or divisions are rewarded for behaviors that conflict with a common organizational goal. When performing the ITAs, the Air Force asked us to:
- analyze the most frequently recurring acquisition issues and their possible root causes across multiple ITAs
- examine the events, trends/patterns, and underlying structures present in the conducted ITAs
- identify attributes of projects that are most likely to result in specific issues
- provide initial recommendations, where possible, that could address and mitigate the root causes
As part of our analysis, we interviewed people who had been on those teams and looked at documents and other information that we had gathered. We then collated all the findings that had come out of those programs to identify common themes and related issues that we observed. Out of that process came many different findings that we categorized into several overarching themes. We also found a series of emerging trends across this sample. While the sample space is specific to the U.S. Air Force, the trends are representative of what we've seen across all types of acquisition programs in the Department of Defense (DoD).

The First Theme: Misaligned Incentives

There is a saying: Individually optimal decisions often lead to collectively inferior solutions. Or, to put it more simply: if everyone acts in what they believe to be their own best interests, as opposed to the group's best interests, the overall result for the group may be suboptimal—and in some cases, catastrophic. It is often the case in acquisition that individuals face situations where their incentives may not align with broader team objectives. Likewise, team objectives may not align with broader organizational objectives. One example of this type of problem is the situation where a lengthy program duration further extends the schedule (also known as "Longer Begets Bigger"). While a lengthy program duration enables the creation of greater capability, it also incentivizes the use of less-mature technology to avoid obsolescence at deployment. Likewise, it incentivizes requirements scope "creep" due to changing threats and new technologies while the program is in development. Although minimizing the growth of program schedule and cost should ideally be in everyone's interests, there are conflicting incentives for stakeholders to do just the opposite, ostensibly to deliver a better, more capable system to the warfighter. Another example is the "Bow Wave Effect." In spiral development, there can be an incentive to postpone riskier tasks that were planned for an early spiral (originally intended to reduce risk) to a later spiral, in favor of doing simpler tasks up front. The easier tasks will show good progress, making the program's cost and schedule performance look better in the near term.
This deferral strategy increases risk in later spirals, however, by delaying complex development to a future point when there is less flexibility for change and less room in the schedule to complete the work successfully. The short-term interests of good cost/schedule performance thus often take precedence over the longer-term interests of successful deployment. Misaligned incentives commonly occur in the absence of proper rules that control the rewards or penalties for participants. The underlying principle is that unless the rules incentivize them to do otherwise, people tend to act in their own self-interest. Two common types of misaligned incentives are those in which (1) an individual's interests are traded off against the group's interests and (2) long-term interests are traded off against short-term interests. If some stakeholder goals conflict with program goals, then either contractor self-interest (such as making more money) or Program Management Office (PMO) self-interest (such as making the program last longer) may drive decision making. Neither situation is in the best overall interest of the program or the DoD. So the question is: how can misaligned incentives in acquisition be addressed? Problems relating to misaligned incentives have been the subject of intensive study in many different fields, ranging from social psychology to game theory and behavioral economics. Many approaches to resolving specific types of incentive problems have been developed, both by using new theory and by identifying past strategies that have been used in different domains to successfully deal with these issues. When, as participants in an acquisition program, we find ourselves facing instances of misaligned incentives—and there are many—the goal is to try to align them. Not all incentives, however, are within our "sphere of influence" as engineers and managers. Some are inherent in the governance (i.e., the policies and regulations) that we operate within and can't be changed easily. When this is the case, one of the best ways to mitigate the consequences is simply to recognize their existence. Knowing what lies ahead allows managers to make a compelling case for considering workarounds and other alternative options. Ultimately, however, if misaligned incentives are not addressed by the PMO, its parent Program Executive Office (PEO), or the DoD, they can lead to situations such as:
- PMOs that support the continuation of high-risk, poorly progressing programs due to possible impacts on incomes and careers
- "cost-plus" contracts that encourage longer programs because they mean more revenue for the contractor
- users demanding non-essential requirements and capabilities because they bear little cost for doing so
Reviewing the alignment of the incentives acting on an acquisition program can be revealing. Work done by the SEI on Acquisition Archetypes illustrates some of these issues and recommends some approaches for addressing them. Understanding the incentives will expose opportunities for improving governance by changing the rewards to bring the goals of the various parties into better alignment and reduce conflict among stakeholder groups. We don't have enough space to discuss specific solutions to misaligned incentives in this posting, but will delve into this topic in future postings. In the meantime, the sketch below illustrates the underlying dynamic.
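As a minimal game-theoretic sketch of "individually optimal, collectively inferior" (the payoff values are invented for illustration, not drawn from the ITA data), consider two programs that each choose between reporting risks honestly and overselling progress:

# Sketch: a two-program, prisoner's-dilemma-style incentive table.
PAYOFFS = {  # (choice_a, choice_b) -> (value_to_a, value_to_b)
    ("report",   "report"):   (3, 3),  # both funded at sustainable levels
    ("report",   "oversell"): (0, 5),  # the honest program loses the funding race
    ("oversell", "report"):   (5, 0),
    ("oversell", "oversell"): (1, 1),  # both win funding, both later overrun
}

def best_response(options, their_choice, me):
    # Pick the individually optimal move given the other side's choice.
    def my_payoff(choice):
        pair = (choice, their_choice) if me == 0 else (their_choice, choice)
        return PAYOFFS[pair][me]
    return max(options, key=my_payoff)

options = ("report", "oversell")
# Whatever the other program does, overselling pays more individually...
assert best_response(options, "report", me=0) == "oversell"
assert best_response(options, "oversell", me=0) == "oversell"
# ...yet mutual overselling (1, 1) is worse for both than mutual honesty (3, 3).

Overselling is each program's best response no matter what the other does, yet mutual overselling leaves both worse off than mutual honesty, which is exactly the structure of the "social dilemmas" this series will return to.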
Future postings will also report on research we are conducting that combines characterizations of types of misaligned incentives, such as "social dilemmas," with complex system modeling. This combination is intended to help align incentives so that acquisition staff can make decisions that produce better program outcomes. This is the first post in an ongoing series examining themes in acquisition. Other themes that will be explored in the series include the need to sell the program, the evolution of science projects, and the move toward common infrastructure and joint programs. New installments in the series will be published over the next several months on the SEI blog. Additional Resources: Check back for a forthcoming SEI special report, An Analysis of Recurring Issues Found Across 12 U.S. Air Force Software-Reliant Acquisition Programs.
SEI . Blog . Jul 27, 2015 03:07pm
By Robert Ferguson, Senior Member of the Technical Staff, Software Engineering Process Management Program The Government Accountability Office (GAO) has frequently cited poor cost estimation as one of the reasons for cost overrun problems in acquisition programs. Software is often a major culprit. One study on cost estimation by the Naval Postgraduate School found a median increase of 34 percent in software size over the estimate. Cost overruns lead to painful Congressional scrutiny, and an overrun in one program often leads to the depletion of funds from another. This post, the first in a series on improving the accuracy of early cost estimates, describes challenges we have observed in trying to accurately estimate software effort and cost in Department of Defense (DOD) acquisition programs, as well as in other product development organizations. Periodically, the SEI is called in to review a program's software estimate, usually because two independently generated estimates are far apart, sometimes by a factor of 10 or more. Such disparate results will not pass any of the official milestone reviews and can delay program startup by several months. The frequency of this problem increased with 2008 changes in DOD acquisition regulations that require a full life-cycle cost estimate for review at Milestone A, which occurs at the end of the Material Solution Analysis Phase. (For more information about Milestone A, see the Integrated Defense Life Cycle Chart for a picture and references in the "Article Library.") Formal acceptance of the Milestone A review is signified by the Acquisition Decision Memorandum (ADM). The ADM is required by law in order to issue a Request for Proposal (RFP) to contract for the Technology Development Phase (TDP). Before describing our approach, which we will do in the second post in this series, it's important to understand and evaluate the traditional methods of preparing estimates for acquisition programs. Typically, estimators review the available program information, which includes the following documents:
- an Analysis of Alternatives (AOA) describing the proposed solution
- an Initial Capabilities Document (ICD) and various strategy documents supporting an RFP
- preliminary plans for systems engineering, test and evaluation, and similar early planning documents
Estimators then seek out available cost estimation relationships (CERs) and data from past programs. The estimators must determine which analogies make the most sense. They then apply their expert judgment to prepare a single "most likely" value for size, cost, and schedule for each major subsystem, which is often called a "point estimate." After computing a point estimate, the estimators then add a range, say +/- 25 percent, to the point estimate to account for future changes. These estimates and the background information are delivered to the service (e.g., Army, Navy, Air Force) cost estimation center and the Cost Assessment & Performance Evaluation (CAPE) office for independent review. The various estimates and plans become the content for review by the Milestone Decision Authority (MDA), which determines readiness for the TDP and issues the ADM. Reviewers of the estimate will make a careful examination of the assumptions made by the estimators. Consequently, the program estimate must reflect possibilities for future program change and must provide ranges for possible costs and schedule duration.
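As a minimal illustration of the point-plus-range arithmetic just described (the numbers are invented), and of how treating the same inputs as a distribution lets a reviewer read off costs at chosen confidence levels:

# Sketch: a traditional point estimate with a symmetric +/- 25% band,
# next to a distribution-based view of the same inputs.
import random

point_estimate = 120.0  # hypothetical "most likely" software cost, in $M
low, high = point_estimate * 0.75, point_estimate * 1.25  # the +/- 25% band
print(f"point estimate: {point_estimate:.0f}; range: {low:.0f}-{high:.0f}")

# A triangular distribution over the same band lets a reviewer ask
# "what cost covers 80% of outcomes?" rather than trusting one point value.
samples = sorted(random.triangular(low, high, point_estimate)
                 for _ in range(100_000))
print(f"80th-percentile cost: {samples[int(0.8 * len(samples))]:.0f}")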
Potential changes in technology, program structure, mission, and contract must be considered simultaneously. If the estimates are not reasonably close, the MDA is not likely to approve. In our experience, estimators and reviewers who use traditional methods of cost estimation face the following challenges, because the nature of the information available prior to Milestone A does not correspond well to the input required for these methods:
- The program will not have a detailed requirements document, making it hard to estimate scale or size.
- Staff may not know the specific technologies that will be needed, meaning they may not know the tasks required and the number of trade studies needed.
- The program manager does not know who will perform the development work, so productivity cannot be estimated accurately.
- Estimators may not know what skills will be required, which makes it hard to determine staff training requirements.
Our goal is to develop a method that overcomes the challenges of traditional estimation methods by identifying and expressing the potential changes and ranges of estimating inputs in terms of a probability model. Automation is a key factor in the model so that re-estimating can be performed quickly as the program discovers which changes become realities and which can be ignored. My next posting will describe research that the SEI's Software Engineering Measurement & Analysis initiative is conducting to improve the accuracy of early estimates (whether for a DOD acquisition program or commercial product development work) and ease the burden of additional re-estimations during the program life cycle. Additional Resources: For more information about the Software Engineering Measurement & Analysis Initiative, please visit www.sei.cmu.edu/measurement/index.cfm
SEI . Blog . Jul 27, 2015 03:07pm
By Bill Novak, Senior Member of the Technical Staff, SEI Acquisition Support Program, Air Force Team Background: The U.S. Air Force has sponsored a number of SEI Independent Technical Assessments (ITAs) on acquisition programs that operated between 2006 and 2009. The programs focused on the development of IT systems, communications, command and control, avionics, and electronic warfare systems. This blog post is the second in a series on four themes across acquisition programs that the SEI identified as a result of our ITA work. Other themes explored in the series include misaligned incentives, the evolution of science projects, and common infrastructure and joint programs. This post explores a related second theme, the need to sell the program, which describes a situation in which people involved with acquisition programs have strong incentives to "sell" those programs to their management, sponsors, and other stakeholders so that they can obtain funding, get the programs off the ground, and keep them sold.

The Second Theme: The Need to Sell the Program

Many studies have noted that defense suppliers are consolidating (becoming larger and fewer) and that future DoD budgets will likely shrink. To use scarce resources more effectively, DoD acquisition programs are increasingly combining multiple capabilities into single systems, meaning that there are fewer programs for which the smaller number of defense suppliers must compete. The consequence is that acquisition program awards become "must-win" competitions for those suppliers. In such an environment, it becomes critically important for acquisition program participants to "sell the program" to their management, sponsors, and other stakeholders so that they can obtain and keep their funding. A recent study on reforming defense acquisition found that in many situations "the acquisition culture has become an environment that promotes 'selling' programs and includes behavior fraught with unfounded optimism and parochialism." Such an environment creates incentives for acquisition program management and staff to sell the program and keep it sold by:
- exaggerating the value of the system to the user or warfighter to raise its perceived importance
- underestimating the system's cost to make the price tag more palatable
- defining ambitious requirements that promise substantial jumps in capability, to increase the system's attractiveness to stakeholders
- downplaying adverse information about the program or system that could impact its viability
- delaying risky or complex tasks that could cause poor results or failure
- deferring longer-term investments (such as sustainment planning) that are critical but provide no visible near-term "selling point" for the system
- minimizing real-world test and demonstration activities (such as comprehensive operational tests) that might reveal issues
- using more advanced, and (unfortunately) sometimes less mature, technology to promise superior system capability
So why is the SEI looking at these issues? Because the role of software in defense programs has increased dramatically and continues to rise, which further complicates these issues. For example, the complexity of software estimation actually promotes underestimation, since it increases the inherent uncertainty of the cost. Also, the ease of deploying upgraded software to already-fielded systems to improve capability is very attractive—but requires even more up-front sustainment planning, rather than less.
Contractors also often have a vested interest in "selling the program" through underbidding and other activities. If the program is not awarded, or is cancelled, there is no income. As the GAO noted last year, there are "…prevailing pressures to force programs to compete for funds by exaggerating achievable capabilities, underestimating costs, and assuming optimistic delivery dates." Our concern is that in situations when all these incentives come together, the annual funding process creates a competition in which "success" is measured more in terms of the ability to obtain the full amount of the next year's funding than in terms of delivering promised capabilities on time to the warfighter. In short, the goal becomes maintaining the perception of high value and good progress for as long as possible. The resulting push to "sell" programs is a root cause of many of the recurring acquisition problems that we see in our work, including acquisition programs that run over schedule and budget. We can see how certain actions, such as underestimating costs and overpromising results, directly lead to schedule pressure due to the cost of development staff. Likewise, the use of less mature technology raises risk, which frequently leads to schedule and cost overruns. When taken together, these incentives all push programs in the same direction: running over cost and schedule, underperforming, reducing functionality, and diminishing quality. The existence of an incentive to follow a particular course of action does not necessarily mean that the incentive will succeed in producing that behavior. Our goal at the SEI is to gain a better understanding of this problem so that incentives in acquisition can be aligned in such a way that they benefit the individual and the group, as well as the government and country. Acquisition program management and staff should not be forced to choose between what's best for them and their program and what may be best for their service or country. When faced with these kinds of misaligned incentives, most people believe they have the integrity to "do the right thing." When implicit incentives are built into the acquisition system, however, they encourage the acquisition community to overstate value, understate costs, use immature technology, and minimize risks and problems. When these incentives exist, there is often pressure to act accordingly, despite the best of intentions by participants in the process. As we look toward addressing this particular aspect of misaligned incentives, we face the same challenges that we laid out in the blog entry on the first overarching theme in acquisition. We need to start addressing each of these implicit incentives in the acquisition system. We begin by shining a light on them and recognizing them as counterproductive influences that often weaken—not strengthen—DoD readiness and effectiveness. The next step is to help inform the acquisition community about these issues, show them how they occur, and equip them with a toolkit of methods to counteract them to better serve warfighters and taxpayers. New research at the SEI is applying analysis and modeling tools to characterize acquisition behaviors and assess the effectiveness of different techniques for aligning incentives. We are developing interactive exercises that can be used in both classroom and eLearning contexts to help the acquisition community find ways of better managing the development of our essential defense systems.
This is the second post in an ongoing series examining themes in software-reliant acquisition. New installments in the series will be published over the next several months on the SEI blog, where a new post is published every Monday morning. Additional Resources: Check back for a forthcoming SEI special report, An Analysis of Recurring Issues Found Across 12 U.S. Air Force Software-Reliant Acquisition Programs.
SEI . Blog . Jul 27, 2015 03:05pm
By Nanette Brown, Senior Member of the Technical Staff, Research, Technology, and System Solutions Program Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post is from the SATURN Network blog by Nanette Brown, a visiting scientist in the SEI's Research, Technology, and System Solutions program. This post, the second in a series on lean principles and architecture, takes an in-depth look at the waste of waiting and how it is an important aspect of the economics of architecture decision making. Read more…
SEI . Blog . Jul 27, 2015 03:04pm
By Robert Ferguson, Senior Member of the Technical Staff, Software Engineering Process Management Program The Government Accountability Office (GAO) has frequently cited poor cost estimation as one of the reasons for cost overrun problems in acquisition programs. Software is often a major culprit. One study on cost estimation by the Naval Postgraduate School found a median increase of 34 percent in software size over the estimate. Cost overruns lead to painful Congressional scrutiny, and an overrun in one program often cascades and leads to the depletion of funds from others. The challenges encountered in estimating software cost were described in the first post of this two-part series on improving the accuracy of early cost estimates. This post describes new tools and methods we are developing at the SEI to help cost estimation experts get the information they need into a familiar and usable form for producing high-quality cost estimates early in the life cycle. To help overcome the fact that the data available early in a program life cycle does not correspond to the input data required for most cost estimation models, our method performs the following steps:
1. Identify program execution change drivers (referred to simply as "drivers") that are specific to the program.
2. Identify an efficient set of scenarios representing combinations of the driver states.
3. Develop a probability model (e.g., a Bayesian Belief Network (BBN)) depicting the cascading nature of the drivers.
4. Supplement traditional use of analogy with the BBN to predict the uncertainty of inputs to traditional cost models for each scenario.
5. Use Monte Carlo simulation to compute a scenario cost estimate based on the uncertain inputs from the previous step.
6. Use Monte Carlo simulation to consolidate the set of scenario cost estimates into a single, final cost estimate for the program.
The remainder of this post describes each step in more detail. In Step 1 we facilitate a short workshop with various program domain experts to identify how cost is affected by possible drivers, such as changes in program sponsorship or changes in supplier relationships. Workshop participants first select a "nominal" state for each driver, along with one or more possible alternate states. Notionally, the alternate states represent future conditions of each driver that will likely impact the program cost. We selected the Navy-AF Probability-of-Program-Success (POPS) criteria as a straw man to kick-start this workshop and expedite the discussion of possible drivers for the program. POPS contains seventeen categories, each with multiple decision criteria, mostly related to program management. We extended the straw man with several other technical drivers, such as capability-based analysis, capability definition, and systems design. After the drivers and driver states are fully identified, workshop participants subjectively evaluate the probability that each driver state will occur in the future. To avoid the common pitfalls of eliciting expert judgment of probabilities, we leverage recently published work on the calibration of expert judgment based on the book "How to Measure Anything" by Douglas Hubbard. Step 2 is based on techniques for scenario planning. As Lindgren and Bandhold describe in their book, Scenario Planning - Revised and Updated Edition: The Link Between Future and Strategy, scenario planning has been extensively studied as a means to analyze and manage uncertainty in product development and support strategic planning.
In our use of scenario planning, a scenario consists of the combination of one or more drivers, each in a specific state. A nominal scenario may therefore be cast as all of the drivers set to their nominal states. A separate scenario may be cast as a small subset of the drivers, each set to one of its alternate states. The combinations of drivers and their states can clearly explode as a combinatorial problem. We employ a driver relationship matrix to subjectively ascertain the most likely cascading situations among the drivers, along with optional use of an orthogonal array (a statistical technique that allows a specific sample of scenarios to be evaluated, thereby producing a model that explains all remaining scenarios) to produce a representative and efficient set of scenarios to guide cost estimation. In Step 3 we construct a Bayesian Belief Network (BBN), a probabilistic model that dynamically represents the drivers and their relationships as envisioned by the program domain experts. Although initially populated with quantified relationships from the driver relationship matrix, this model may be further refined through analysis of quantitative data on the drivers from historical program completions. Such refinement produces BBN-modeled drivers that go beyond simple binary states of nominal versus non-nominal, to drivers modeled with all of their detailed alternate states and the quantified relationships between the alternate states of different drivers. This approach provides much greater modeled information and more accurate cost estimates compared with traditional statistical regression approaches, which only cover a small fraction of the driver state combinations, arbitrarily decided in advance. With BBNs, the driver states may be flexibly modeled to produce cost estimates, irrespective of which driver state combinations and data are available. In Step 4 the experts examine each scenario and apply their knowledge of past programs to select relevant program and/or component analogies and associated Cost Estimation Relationships (CERs), which are empirical formulas predicting cost from domain-specific attributes, researched over decades of DoD program data. An example would be an Air Force CER that predicts the cost and labor hours of aircraft development using factors such as aircraft quantity, maximum speed, and weight. Estimators may identify as many as two dozen CERs for use in the different components and subsystems of a given scenario. After identifying the analogies and associated CERs, the workshop participants then use the BBN to compute uncertainty distributions for the input factors to the cost-model CERs for each scenario. The benefit of this approach is that the uncertainty of the CER input factors is overtly documented, rather than estimators guessing a single value and trying to discern the final cost estimate from it. Step 5 uses traditional cost models in a Monte Carlo simulation, in which hundreds of thousands of "what-if" hypotheses are calculated using the various uncertain input factor values to estimate cost for a given scenario. As a result, each scenario will have a computed cost estimate in the form of a distribution. This approach provides a confident range of expected behavior for the cost estimate, rather than a single "guesstimate" point value. The immediate benefit is a cost estimate with defined upside and downside uncertainty.
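A minimal sketch of this Monte Carlo step follows. The power-law CER and the input ranges are invented for illustration; a real estimate would use validated CERs and the BBN-derived input distributions from Step 4.

# Sketch of Step 5: Monte Carlo over uncertain CER inputs yields a cost
# distribution (read off at chosen percentiles) for one scenario.
import random

def cer_cost(weight_lbs, max_speed_kts):
    # Hypothetical power-law CER; real CERs are calibrated to program data.
    return 0.005 * (weight_lbs ** 0.85) * (max_speed_kts ** 0.5)

def scenario_estimate(n=100_000):
    costs = []
    for _ in range(n):
        # Input uncertainty (triangular here) would come from the Step 4 BBN.
        weight = random.triangular(18_000, 26_000, 21_000)
        speed = random.triangular(900, 1_200, 1_000)
        costs.append(cer_cost(weight, speed))
    costs.sort()
    return {p: round(costs[int(p / 100 * n)]) for p in (20, 50, 80)}

print(scenario_estimate())  # e.g., {20: ..., 50: ..., 80: ...} percentile costs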
Lastly, in Step 6, the set of scenarios and their corresponding cost estimate distributions are consolidated into a single cost estimate distribution using another Monte Carlo simulation. From this distribution, statements of estimated cost at different confidence levels may then be derived. Our research thus far has focused on evaluating the various steps of the method for practicality and effectiveness. Through an industry pilot of Steps 1 and 2, we examined the typically rich set of possible drivers and their states, along with the combinatorial explosion of the possible scenarios. This pilot enabled us to refine our use of the driver relationship matrix and statistical orthogonal arrays to control the explosion. A subsequent workshop with representatives of the SEI Acquisition Support Program (ASP) enabled us to explore a possible common set of program execution drivers applicable to DoD programs, in addition to evaluating Steps 1-3. This workshop also enabled us to build an initial BBN model that will be applied in future DoD pilots and become an impetus for more detailed DoD characterization and data collection of program execution drivers. We've made consistent progress at each stage of the research thus far, and reactions from participants have been positive. Our goal of cost estimate "transparency," mentioned in the first post, is certainly being realized. Recent comments from service cost center staff confirm that the detailed discussion of program execution change drivers and scenarios provides far greater insight into the cost estimate and would significantly assist those who need to review the cost estimates. Further work must be done to evaluate the accuracy of the resulting estimates and the impact of the estimating process. We'll keep you posted. Additional Resources: For more information about the Software Engineering Measurement & Analysis Initiative, please visit www.sei.cmu.edu/measurement/index.cfm
SEI . Blog . Jul 27, 2015 03:04pm
By Grace Lewis, Senior Member of the Technical Staff, Research, Technology, and System Solutions Program The Department of Defense (DoD) is increasingly interested in having soldiers carry handheld mobile computing devices to support their mission needs. Soldiers can use handheld devices to help with various tasks, such as speech and image recognition, natural language processing, and decision making and mission planning. Three challenges, however, present obstacles to achieving these capabilities. The first challenge is that mobile devices offer less computational power than a conventional desktop or server computer. A second challenge is that computation-intensive tasks, such as image recognition or even global positioning system (GPS) use, take a heavy toll on battery power. The third challenge is dealing with unreliable networks and bandwidth. This post explores our research on overcoming these challenges by using cloudlets: localized, lightweight servers running one or more virtual machines (VMs) onto which soldiers can offload expensive computations from their handheld mobile devices, thereby gaining greater processing capacity and conserving battery power. This leveraging of external resources to augment the capabilities of resource-limited mobile devices is a technique commonly known as cyber-foraging. The use of VM technology provides greater flexibility in the type and platform of applications and also reduces setup and administration time, which is critical for systems at the tactical edge. The term tactical edge refers to systems used by soldiers or first responders close to a mission or emergency, executing in environments characterized by limited resources in terms of computation, power, and network bandwidth, as well as by changes in the status of the mission or emergency. Cloudlets are located in proximity to the handheld devices that use them, thereby decreasing latency by using a single-hop network and potentially lowering battery consumption by using WiFi instead of broadband wireless, which consumes more energy. For example, a cloudlet might run in a Tactical Operations Center (TOC) or a Humvee. From a security perspective, cloudlets can use WiFi networks to take advantage of existing security policies, including access from only specific handheld devices and encryption techniques. Related work on offloading computation to conserve battery power in mobile devices relies on the conventional Internet or on environments that tightly couple applications running on handheld devices with the servers onto which computations are offloaded. In contrast, cloudlets decouple mobile applications from the servers. Each mobile app has a client portion and an application overlay corresponding to the computation-intensive code invoked by the client. On execution, the overlay is sent to the cloudlet and applied to one of the virtual machines running there, a process called dynamic VM synthesis. The application overlay is pre-generated by calculating the difference between a base VM and the base VM with the computation-intensive code installed. The only coupling between the mobile app and the cloudlet is that the same version of the VM software on which the overlay was created must be used. Since no application-specific software is installed on the server, there is no need to synchronize release cycles between the client and server portions of apps, which simplifies the deployment and configuration management of apps in the field.
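Conceptually, overlay creation and synthesis amount to shipping only the difference between two VM images. The sketch below uses a naive block-level diff for illustration; actual cloudlet prototypes operate on compressed VM images with binary-delta encoding rather than this simplistic scheme.

# Sketch: pre-generating an overlay as the diff between a base VM image
# and the same image with the computation-intensive code installed, then
# reconstructing the launch image on the cloudlet ("dynamic VM synthesis").
BLOCK = 4096

def make_overlay(base_img: bytes, app_img: bytes) -> dict:
    # Record only the blocks that differ from the base VM image.
    overlay = {}
    for i in range(0, len(app_img), BLOCK):
        if app_img[i:i + BLOCK] != base_img[i:i + BLOCK]:
            overlay[i] = app_img[i:i + BLOCK]
    return overlay

def synthesize(base_img: bytes, overlay: dict) -> bytes:
    # On the cloudlet: apply the overlay to a matching base VM image.
    img = bytearray(base_img)
    for offset, block in overlay.items():
        img[offset:offset + len(block)] = block
    return bytes(img)

Because the overlay records differences against a specific base image, the cloudlet must run the same base VM version the overlay was generated from, which is the one coupling noted above.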
Dynamic VM synthesis is particularly useful in tactical environments characterized by unreliable networks and bandwidth, unplanned loss of cyber-foraging platforms, and a need for rapid deployment. For example, imagine a scenario in which a soldier needs to execute a computation-intensive app configured to work with cloudlets. At runtime, the app discovers a nearby cloudlet located on a Humvee and offloads the computation-intensive portion of code to it. Due to enemy attacks, network connectivity, or exhaustion of energy sources on the cloudlet, however, the mobile app is disconnected from the cloudlet. The mobile app can then locate a different cloudlet (e.g., in a TOC) and—due to dynamic VM synthesis—can have the app running in a short amount of time, with no need for any configuration on the app or the cloudlet. This flexibility enables the use of whatever resources become opportunistically available, as well as the replacement of lost cyber-foraging resources and the dynamic customization of newly acquired cyber-foraging resources. As part of our research, we are focusing on face recognition applications. Thus far we have created an Android-based facial recognition app that performs the following actions:
- It locates a cloudlet via a discovery protocol.
- It sends the application overlay to the cloudlet, where dynamic VM synthesis is performed.
- It captures images and sends them to the facial recognition server code that now resides in the cloudlet.
The application overlay is a facial recognition server written in C++ that processes images from a client for training or recognition purposes. When in recognition mode, it returns coordinates for the faces it recognizes, as well as a measure of confidence. The first version of the cloudlet is a simple HTTP server that receives the application overlay from the client, decrypts and decompresses the overlay, and performs VM synthesis to dynamically set up the cloudlet. The first phase of our work has focused on creating the cloudlet prototype described above. In the second phase we will conduct measurements to see whether computations in a cloudlet provide significant reductions in device battery consumption. In addition, we will gather measurements related to the bandwidth consumption of overlay transfer and VM synthesis, to focus on optimizing cloudlet setup time. Assuming we are successful, our third phase will create a cloudlet in the RTSS Concept Lab to explore other ways to take computation to the tactical edge. As part of our research, we are collaborating with Mahadev Satyanarayanan, the creator of the cloudlet concept and a faculty member in Carnegie Mellon University's School of Computer Science. We will be blogging about the progress of our research in future posts. Additional Resources: To read more about the cloud computing research conducted by the SEI's System of Systems team, please visit www.sei.cmu.edu/sos/research/cloudcomputing/ To view an SEI webinar on cloud computing, please visit www.sei.cmu.edu/library/abstracts/webinars/Cloud-Computing.cfm
SEI . Blog . Jul 27, 2015 03:03pm
By Donald Firesmith, Senior Member of the Technical Staff, Acquisition Support Program In our research and acquisition work on commercial and Department of Defense (DoD) programs ranging from relatively simple two-tier data-processing applications to large-scale multi-tier weapons systems, one of the primary problems that we see repeatedly is that requirements engineers tend to focus almost exclusively on functional requirements and largely ignore the so-called nonfunctional requirements, such as data, interface, and quality requirements, as well as technical constraints. Unfortunately, this myopia means that requirements engineers overlook critically important, architecturally significant quality requirements that specify minimum acceptable amounts of qualities such as availability, interoperability, performance, portability, reliability, safety, security, and usability (for example, "the system shall be available at least 99.9 percent of the time" rather than simply "the system shall be highly available"). This blog post is the first in a series that explores the engineering of safety- and security-related requirements. Quality requirements are essential to a system's architecture and its acceptability to stakeholders. There are several reasons, however, why quality requirements are rarely well specified:
- Functional requirements are central to how stakeholders tend to think about the system (i.e., what functions the system performs for its users). Popular requirements engineering techniques, such as use case modeling, are effective for identifying and analyzing functional requirements. Unfortunately, these techniques are inadequate and inappropriate for nonfunctional requirements, which include quality requirements as well as interface requirements, data requirements, and architecture/design constraints. By specifying how well the system performs its functions, quality requirements logically follow functional requirements.
- Most acquisition programs do not explicitly use quality models, which define the different types of quality, their units of measure, and associated metrics (e.g., as defined in ISO/IEC 9126-1 Software Engineering - Product Quality - Quality Model). Without relatively complete quality models, stakeholders and developers are often unaware of—and tend to overlook—the many types of quality. It is also hard for many stakeholders to specify the required level of these qualities.
- Requirements engineers rarely receive any training in identifying and specifying quality requirements and thus have far less experience engineering them, because quality requirements are often considered the responsibility of specialty engineering groups, such as reliability, safety, security, and usability (human factors).
While many types of quality requirements are important, safety and security requirements are two of the most vital; almost all major commercial and DoD systems have significant safety and security ramifications, and many are safety- and security-critical. It is far better to build safety and security into a system than to add them once the system's architecture has been completed, much less after the system exists and has been fielded. Yet system requirements rarely specify how safe and secure a system must be to adequately defend itself and its associated assets (people, property, the environment, and services) from harm. Far too often, requirements do not specify what accidents and attacks must be prevented, what types of vulnerabilities the system must not incorporate, what hazards and threats it must defend against, and what the maximum acceptable safety and security risks are. How big is this problem?
On production projects, poor requirements lead to budget and schedule overruns, missed or incorrectly implemented functionality, and systems that are delivered but never used. According to Nancy Leveson, a respected expert in software safety, up to 90 percent of all accidents are caused, at least in part, by poor requirements. For example, conventional requirements documents typically do not specify what systems should do in unlikely situations, such as when:
- valuable assets are harmed
- accidents and attacks occur
- internal system vulnerabilities exist
- external abusers exploit these vulnerabilities
- safety hazards and security threats exist
- safety and security risks are high
Requirements also rarely specify that the system must detect when these safety- and security-related events occur or conditions exist. Similarly, the requirements often don't specify what the system must do when it detects them. To summarize, requirements typically do not adequately specify the safety- and security-related problems the system must prevent, the system's detection of these problems, and how the system must react to their detection. The next post in this series will discuss the obstacles that acquisition and development organizations face when engineering safety- and security-related requirements. Our final post in the series will present a collaborative method for engineering these requirements based on:
- a common ontology of the concepts underlying safety and security
- a clear model (including proper definitions) of the different types of safety- and security-related requirements
- a shared set of safety and security analysis techniques that is useful to engineers from both disciplines
Additional Resources: To view a tutorial on engineering safety- and security-related requirements for software-intensive systems, please visit www.sei.cmu.edu/library/abstracts/presentations/icse-2010-tutorial-firesmith.cfm To read an SEI technical note on the common concepts underlying safety, security, and survivability engineering, please visit www.sei.cmu.edu/library/abstracts/reports/03tn033.cfm To read an SEI technical report on the Security Quality Requirements Engineering (SQUARE) method for engineering security requirements, please visit www.sei.cmu.edu/library/abstracts/reports/05tr009.cfm
SEI . Blog . Jul 27, 2015 03:02pm
By Bill Novak, Senior Member of the Technical Staff, SEI Acquisition Support Program, Air Force Team Background: Over the past decade, the U.S. Air Force has asked the SEI's Acquisition Support Program (ASP) to conduct a number of Independent Technical Assessments (ITAs) on acquisition programs related to the development of IT systems, communications, command and control, avionics, and electronic warfare systems. This blog post is the third in a series on common themes across acquisition programs that we identified as a result of our ITA work. Other themes explored in this series include misaligned incentives, the need to sell the program, and common infrastructure and joint programs. This post explores the third theme, the evolution of "science projects," which describes how prototype projects that unexpectedly grow in size and scope during development often have difficulty transitioning into a formal acquisition program.

The Third Theme: The Evolution of "Science Projects"

A growing theme in acquisition is the increasing prevalence of programs that are often initially called "science projects," and the difficulties associated with evolving such efforts into larger systems and more formal acquisition efforts. The name "science project" refers to a program that starts out small—often as an experimental development or prototype system—so that it can clarify user requirements, better understand the problem, and produce a compelling "proof of concept" to help solicit funding and "sell" the program. The recurring dynamic of science projects has received little discussion in the acquisition literature, even though the pattern is increasingly common. Due in part to urgent demands for new technologies in theaters such as Iraq and Afghanistan, a quarter of the programs that we assessed for the Air Force study mentioned above could be characterized as having begun as science projects. Many defense programs that started out as science projects ultimately produced important advances in technology that leapfrogged our adversaries and gave American warfighters a critical edge in conflicts around the world. In many cases the ability to quickly develop and deploy a new technology has been key to adapting effectively to rapidly changing conditions and threats. The question is not whether we need science projects, but how we can most effectively—and sustainably—move the technology from a prototype into the hands of the user or warfighter. Science projects are often initiated—and even frequently managed—by their user community, who are often experts in the relevant domain area but lack expertise in software engineering and management. Some projects may be organized as less formal "in-house" developments, while others are begun using contractors who specialize in building advanced prototype systems. Still others are the product of laboratories or Advanced Concept Technology Demonstration (ACTD) efforts. In any case, science projects often evolve organically, without the structure or investment in design that should go into building a large and mission-critical software-reliant system. These shortcuts sometimes occur because there isn't yet adequate demand for the capability to justify a formal program. Other times there has been a conscious decision to develop the capability as quickly as possible by "flying under the radar" and avoiding the bureaucratic overhead and cost of the formal acquisition process.
A potential downside of science projects, however, is that these systems can become the victims of their own success. The natural and laudable instinct of commanders to provide valuable new capabilities to their warfighters in the field as quickly as possible can itself become a threat to the project. The initial prototypes of science projects often grow incrementally, without a guiding vision or architecture, and are tested in the field, often with very positive initial reactions to the system's new capabilities. Once warfighters and other users recognize the value of the system's capability, demand increases quickly, and the warfighters may soon become unwilling to part with these new capabilities. By this point the system has already incorporated many new features in response to warfighter needs, with the code base becoming increasingly hard to maintain and bugs being injected as each change is made or new feature is added. Documentation—rarely a strong point of prototype development efforts—falls ever further behind.

In such an environment, science projects are forced to evolve rapidly into full-scale acquisition programs that can reliably deliver a robust production system, a growth spurt that in many cases was neither anticipated nor properly planned for. It is at this point that many science projects "hit the wall" with a large, mostly undocumented, convoluted, defect-laden, and unmaintainable code base, unable to make progress at their early development rates. Almost any change made to such a system will have multiple adverse unintended consequences for performance and robustness. The software—like the plumbing in an old house—can only keep working for so long before the entire system must be scrapped and re-implemented.

Sound software engineering practice dictates that when a prototype's objectives have been met, it shouldn't be fielded directly as an operational system. Although many aspects of the prototype can and should be reused, systematic development and quality assurance methods are still needed (see the sketch below). This evolution can take many forms, including agile approaches, but all such methods still require software expertise and rigor. Instead, what often happens is that a science project tries to transform its prototype into a full-scale system built on top of an unsound foundation. The system then frequently experiences problems with robustness, capability, performance, and usability, among other issues. As major new capabilities must be added to the prototype to satisfy still-growing user demand, the project's managers see the shortcomings of its evolved design and would like to pause development to re-architect it. Insistent user demand, however, won't allow the project to "go dark" for months—much less years—to do work that produces little in the way of visible new features for end users. Even if the government were inclined to discard the prototype and start new development from scratch, the contractor has an incentive to persuade the government to reuse the prototype software as the platform for the new work, thereby making the contractor more indispensable to the future development effort.

When the technical aspects of a project encounter difficulties, the management aspects do as well. The project infrastructure (the project team size, its processes, the level of software development and management experience, and so on) that may have been adequate for building a prototype can be inadequate for developing a formal production system intended for wide deployment.
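One widely used safeguard when prototype code must be carried forward is to pin down its current behavior with "characterization tests" before restructuring it for production. The sketch below is illustrative only; the track-fusion routine and its quirks are hypothetical stand-ins for undocumented prototype code, not anything from an actual program.

```python
# Minimal sketch: a characterization test records what reused prototype code
# actually does today, so later refactoring cannot silently change fielded
# behavior. The function fuse_tracks is a hypothetical stand-in.

import unittest

def fuse_tracks(track_a, track_b):
    """Hypothetical prototype routine: averages two position estimates."""
    return [(a + b) / 2.0 for a, b in zip(track_a, track_b)]

class FuseTracksCharacterization(unittest.TestCase):
    """Pins down current behavior; no specification exists to test against."""

    def test_known_input_output_pair(self):
        # Expected values were captured by running the prototype itself.
        self.assertEqual(fuse_tracks([1.0, 2.0], [3.0, 4.0]), [2.0, 3.0])

    def test_mismatched_lengths_truncate(self):
        # Surprising but *current* behavior: zip() silently truncates.
        # A production rewrite would likely make this an explicit error.
        self.assertEqual(fuse_tracks([1.0, 2.0, 3.0], [3.0]), [2.0])

if __name__ == "__main__":
    unittest.main()
```

Tests like these are deliberately descriptive rather than prescriptive: they buy the team freedom to restructure the code while the requirements and architecture are being formalized.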
These mismatches in both the system design and the project infrastructure can mean that most aspects of the project are now inappropriate for their new purpose and must either be discarded or substantially changed and expanded. We are seeing science projects morph awkwardly and ineffectively into convoluted production systems more frequently, driven by the desire to deploy capabilities that demand new technologies to the field at an ever faster pace.

Science projects don't have to "hit the wall," as long as the needs to scale up both the system design and the project organization are recognized and acted upon. Unfortunately, in an environment of scarce funding and chronic schedule pressure, there is a strong temptation to continue development on the same code foundation, with the original project infrastructure, in the often mistaken hope that it will suffice. We collectively need to do a better job of working with science projects to help them be more successful in "crossing the chasm" from prototype development efforts to formal acquisition programs. The challenge is to help acquisition staff identify their situation early on, understand the issues involved, and recognize and act upon the need to scale up both the organization and the system architecture in time to complete a successful acquisition effort.

In the blog post Enabling Agility by Strategically Managing Architectural Technical Debt, my colleague Ipek Ozkaya describes a promising methodological advance that could help mitigate some of these problems. Likewise, my colleague Rick Kazman describes strategies for architectural documentation that help make the fundamental design rules of science-project software more explicit, thereby helping developers, testers, and maintainers understand the software and work together more effectively (a minimal sketch of one such explicit, checkable design rule appears at the end of this post). By better understanding how concepts like technical debt and architectural documentation can help us manage the evolution of software more efficiently, and by applying methods such as refactoring to restructure our code, we are taking important steps toward mastering science projects.

This is the third post in an ongoing series examining themes in acquisition. The first post explored misaligned incentives. The second post explored the need to sell the program. New installments in the series will be published over the next several months on the SEI blog, where a new post is published every Monday morning.
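As a concrete and purely illustrative example of making a design rule explicit and checkable, the test below fails whenever code in a hypothetical core package imports from a hypothetical ui package. The layer names are assumptions made for this sketch; real projects might instead rely on an off-the-shelf tool such as import-linter.

```python
# Sketch of an executable design rule: low-level modules must not depend on
# the user-interface layer. The package names ("core", "ui") are hypothetical.

import ast
import pathlib

FORBIDDEN = {"core": {"ui"}}  # modules under core/ must not import from ui/

def imported_top_level_names(path):
    """Return the top-level package names imported by a Python source file."""
    tree = ast.parse(path.read_text())
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def test_layering():
    """Fail the build with a readable report if any layering rule is broken."""
    violations = []
    for layer, banned in FORBIDDEN.items():
        for src in pathlib.Path(layer).rglob("*.py"):
            hits = imported_top_level_names(src) & banned
            if hits:
                violations.append(f"{src}: imports {sorted(hits)}")
    assert not violations, "\n".join(violations)
```

Encoding a design rule as a test makes it part of the documentation that developers, testers, and maintainers actually run, rather than a diagram that drifts out of date.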
SEI . Blog . Jul 27, 2015 02:59pm
By Donald Firesmith, Senior Member of the Technical Staff, Acquisition Support Program

Background: In our research and acquisition work on commercial and Department of Defense (DoD) programs, ranging from relatively simple two-tier data-processing applications to large-scale multi-tier weapons systems, one of the primary problems that we see repeatedly is that acquisition and development organizations encounter the following three obstacles concerning safety- and security-related requirements:

- Safety, security, and requirements engineers typically know little about each other's disciplines.
- Safety, security, and requirements engineers often reside in separate teams that rarely collaborate when engineering safety- and security-related requirements.
- Safety and security are viewed as specialty engineering disciplines that are not well integrated into the overall systems engineering process until after the architecture is largely finalized.

This is the second post in a series exploring the engineering of safety- and security-related requirements. The first post in the series explored problems with quality requirements. This post takes a deeper dive into the key obstacles that acquisition and development organizations encounter concerning safety- and security-related requirements. In the third part of this series, we will introduce a collaborative method for engineering these requirements that overcomes the obstacles identified here.

The first obstacle is a lack of understanding of each other's disciplines. The safety, security, and requirements communities each have their own terminology, methods, techniques, models, and documents. They read their own journals and books, and they attend their own conferences. In short, they form separate stovepipes that rarely interact. Safety engineers know how to perform safety (hazard) analysis, security engineers know how to perform security (threat) analysis, and requirements engineers know how to perform requirements analysis. Unfortunately, they are rarely trained in each other's disciplines. These three communities have independently developed effective techniques and methods for performing their own analyses, but they remain largely unaware of each other's work. In practice, however, safety techniques and methods are often quite appropriate—with little or no modification—for performing security analyses, and vice versa. This lack of awareness limits the options available to members of these communities and frequently leads to duplication (often inconsistent or incomplete duplication) of each other's work.

Requirements engineers have an additional problem. Although they know how to engineer functional requirements, we have seen many projects where they use functional decomposition or use-case modeling to the exclusion of all other requirements analysis techniques. While these approaches may work well for functional requirements, they are not effective for engineering quality requirements, such as safety and security, as discussed in the first post in this series.

The second obstacle is a lack of close collaboration among these three types of engineers, which is especially detrimental to safety and security, two sides of the same coin. Both safety and security engineering are concerned with preventing negative events and conditions. The primary difference is that safety deals with unintentional (accidental) negatives, whereas security deals with intentional (malicious) ones.
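Because the two disciplines reason about the same cause-and-effect chains, they can in principle share a single model of negative events and the vulnerabilities that link them. The sketch below is a minimal illustration of that idea using this post's power-outage example; all class and field names are hypothetical and are not drawn from any established safety or security standard.

```python
# Illustrative sketch only: one shared vocabulary for safety (hazard) and
# security (threat) analysis, showing that both disciplines reason about
# the same chains of events and vulnerabilities.

from dataclasses import dataclass, field

@dataclass
class Vulnerability:
    description: str

@dataclass
class NegativeEvent:
    """A negative event: unintentional (safety) or intentional (security)."""
    description: str
    intentional: bool                             # False = hazard, True = threat
    exposes: list = field(default_factory=list)   # vulnerabilities it creates
    exploits: list = field(default_factory=list)  # vulnerabilities it relies on

# The power-outage example, expressed in the shared model: an accident
# exposes a vulnerability that an intentional attack can then exploit.
unlocked_doors = Vulnerability("physical access-control system fails open")
outage = NegativeEvent("electrical power outage", intentional=False,
                       exposes=[unlocked_doors])
intrusion = NegativeEvent("unauthorized entry to secured area", intentional=True,
                          exploits=[unlocked_doors])
```

A shared model like this gives safety, security, and requirements engineers one artifact to collaborate on instead of three parallel, partially duplicated analyses.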
Although safety and security engineers have different, albeit complementary, responsibilities, their concerns are not independent. Accidents can cause vulnerabilities that can be exploited during an attack, and attacks can cause vulnerabilities that lead to accidents. For example, an electrical power outage (accident) could cause the failure of a physical access-control system (vulnerability), leaving security doors unlocked so that unauthorized people can enter secured areas (attack). Similarly, safety-critical software could be infected by malware (attack), causing it to fail (vulnerability), which is a hazard that can lead to associated accidents. As mentioned above, however, safety and security engineers rarely interact, so each tends not to appreciate what the other does. As both groups have recognized the need to prevent accidents and attacks, security engineers are starting to claim parts of safety, while safety engineers are beginning to claim parts of security. The lack of collaboration between the two teams can lead to duplicated work, and stepping on each other's turf often breeds resentment rather than recognition of the need for, and value of, collaboration.

This brings us to the third and final obstacle: the view that safety and security are specialty engineering areas that need not be incorporated into systems engineering until after the system requirements that drive the architecture are defined and the primary architecture decisions have been made. This omission means that safety and security engineers are rarely empowered to make architectural decisions that ensure the actual safety and security requirements are met. It is therefore more than a problem of not working closely together; safety and security engineers rarely become involved in the systems engineering until after key requirements and architecture decisions have been completed, which is too late. This viewpoint causes safety and security engineers to concentrate on fixing architectural weaknesses rather than specifying the requirements that would prevent those weaknesses from being incorporated into the architecture in the first place. Safety and security engineers also tend to apply industry-standard controls, such as shielding and interlocks (safety) or encryption and passwords (security), rather than asking how safe and secure the system needs to be. In other words, what are the safety- and security-related requirements that should be driving the architecture? (The sketch at the end of this post gives a flavor of such a requirement.) It is critical to build safety and security into the system from the beginning, because it is very difficult and expensive to add them to an existing architecture afterwards.

Given these obstacles, how can we overcome them to properly engineer safety and security requirements? The answer is to provide safety, security, and requirements engineers with an effective method for collaborating closely when engineering these requirements. In the third part of this series, we will introduce such a collaborative method.

Additional Resources: For more information, please visit www.sei.cmu.edu/library/abstracts/presentations/icse-2010-tutorial-firesmith.cfm
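To give a flavor of the kind of architecture-driving requirement this post calls for, here is a purely hypothetical example expressed as data. The identifier, field names, and the 2-second threshold are invented for illustration and are not drawn from any real program.

```python
# Hypothetical example only: a safety-related requirement quantified enough
# to drive architecture decisions (e.g., battery backup, fail-secure locks).
requirement = {
    "id": "SAF-012",                      # invented identifier
    "discipline": "safety",
    "statement": ("If primary electrical power is lost, physical "
                  "access-control doors shall fail secure (locked) "
                  "within 2 seconds."),
    "rationale": ("An outage must not create a vulnerability that an "
                  "intruder can exploit; the safety requirement is thus "
                  "tied directly to a security concern."),
    "verification_method": "test",
}
```

A requirement stated this way constrains the architecture up front, rather than leaving engineers to retrofit standard controls after the design is fixed.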
SEI . Blog . Jul 27, 2015 02:59pm