Posted by Harry Hertz, the Baldrige Cheermudgeon
Have you ever pondered this question? I didn’t in a global sense until recently. I’ve had the same experiences you have: having my car hit and then feeling as if I were the one who had done wrong, given all the hoops the responsible person’s insurance company made me go through. But I never generalized that situation…until now.
A few recent incidents brought this to light for me. A colleague bought something online, and the seller shipped the wrong item. When contacted, the seller required that the wrong item be returned before the correct item would be shipped and the return postage refunded. The seller made the mistake. Why didn’t they offer to send a replacement immediately, along with a return shipping label for the incorrect item?
My car was recently subject to a manufacturer’s recall. I had experienced the problem that triggered the recall. Even though the recall was on the national news, it took another two months until I got the recall notice telling me to bring the car to a dealer. The notice described exactly what had happened to me (four times). I made an appointment and brought the car in. The service representative required me to authorize a $150 diagnostic fee, to be refunded only if the problem turned out to be the recall item. I told them that if the problem was anything more than that, it had been caused by the multiple times I experienced the failure. They should have been apologizing to me for the defect, not trying to collect money for additional repairs. When I submitted a negative review in the online survey that followed the recall repair, I was immediately called by the service manager. He insisted that the approval of a diagnostic charge was necessary to protect the dealership from a liability suit if the problem was something other than the recall item. I replied that if liability were the concern, the recall letter should have been issued immediately, not months after the recall was announced. He continued to argue. I politely hung up! Who should have been protected in this situation, the dealer or me?
We are in the process of buying some real estate. After the contract had been signed by both us and the seller, the seller decided they weren’t interested in selling and would not honor the terms of the contract. The real estate agents started action to protect their commission in case the seller reneged. I was told I could get a lawyer to fight for my interests. I am the customer, but the agents are focused only on their own financial interests!
In all these incidents, the "good guy" is made to suffer. For making a purchase that benefits the seller, you are turned into an innocent victim and left to bear the consequences.
In the Leadership Category, the Baldrige Criteria for Performance Excellence ask about creating and balancing value for customers and other stakeholders. In the Customers Category, the Criteria ask about building customer relationships to acquire customers, build market share, and enhance brand image. Do we need to add notes about victims’ rights or how not to victimize your customers and stakeholders?
Think about your own experiences. How often are each of us turned into innocent victims? What about your organization? Do you unintentionally make victims out of some of your customers or stakeholders?
Sep 10, 2015 09:36am
Posted by Christine Schaefer
In 2007, the U.S. Army Armament Research, Development and Engineering Center (ARDEC) became the first U.S. federal organization to receive the prestigious Malcolm Baldrige National Quality Award.
Located at Picatinny Arsenal, New Jersey, ARDEC had been using the Baldrige Criteria for Performance Excellence (part of the Baldrige Excellence Framework) since 1994 on its journey to excellence.
Although ARDEC has had many changes in leadership over the past two decades, it continues to use the Baldrige framework and principles to make improvements and support high performance, according to Joseph (Joe) Brescia, ARDEC’s director for strategic management and process improvement.
Brescia said that at conferences where he has presented on using the Baldrige framework, "the question we always get is, ‘with many changes in leadership, how do we ensure that we have continuity in terms of maintaining momentum and continuous improvement?’ At ARDEC, we’ve always had the perspective that leaders have to be the unequivocal champions of quality. Our focus has always been, ‘What are we going to do next?’ So there’s always a focus on continuous improvement. That’s an enduring principle of [ARDEC] leadership: that you focus on continual improvement."
At both of this month’s regional Baldrige conferences in Nashville, Tennessee, and Denver, Colorado, Brescia and James (Jim) Caiazzo, team leader for the Office of Strategic Management at ARDEC, will be presenting on their organization’s leadership principles, showing how they are linked to the Baldrige Criteria. In a recent phone interview, Brescia and Caiazzo answered questions about their upcoming presentation and their organization. Following are highlights of that interview.
How does the Baldrige framework support your organization’s leadership practices and performance?
"We found that leaders who demonstrate principle-centered leadership more effectively link mission, vision, values, strategy, structure, and systems to foster a culture of continuous improvement based on trust, respect, and empowerment," said Brescia.
"The principal takeaway" for those who attend the ARDEC session at this year’s Baldrige regional conferences, Brescia said, is that the organization’s "development of strong leadership principles is firmly embedded in the Baldrige Criteria and is essential for sustained superior performance."
Caiazzo said, "We’ve combined some of the principles of Baldrige that you find in the [Baldrige Criteria] category entitled ‘Leadership’ with the principles of the U.S. Army and its leadership development program and process."
"Those principles are based on what is called the Department of the Army Doctrine as represented by Field Manual (FM) 6-22. It contains all the principles we feel are important that are inextricably linked to category 1," continued Caiazzo. "For instance, category 1 talks to how important the mission and the vision and the principles and values are; and in our own leadership development program, we emphasize those right up front."
A slide from ARDEC’s upcoming conference presentation depicts the organization’s integration of the Baldrige framework and the U.S. Army’s leadership development principles. Slide provided by ARDEC.
In regard to the Baldrige emphasis on continuous improvement, Caiazzo pointed out that the Army Field Manual 6-22, "Army Leadership, Competent, Confident, and Agile," defines leadership as "the process of influencing people by providing purpose, direction, and motivation to accomplish the mission and improve the organization." Therefore, he said, "the whole principle and concept of improvement is the central theme to our three-tier leadership development courses."
Among other ways that ARDEC has continued to use Baldrige in its improvement efforts, Caiazzo described how the organization this year adapted and used the online Baldrige Criteria-based surveys Are We Making Progress/Are We Making Progress as Leaders? After customizing the questions for the organization, he said surveys were given to both managers and employees to identify improvement opportunities. The organization is now conducting focus groups based on the survey results. And "each of the 20 units within ARDEC is coming up with a plan on ways to improve on gaps identified," said Caiazzo.
What are a few key reasons that organizations in your sector can benefit from using the Baldrige framework?
"We found that the Criteria are applicable to any organization, public or private, large or small," said Brescia. "Successful organizations, wherever they may come from, tend to have great leadership teams, maintain a high-performing workforce, develop and deploy effective business strategies, know their customers as well as their competitors, have very disciplined work processes, and are typically very data- and results-driven. Now, if that sounds familiar, that [is because those elements] represent the seven categories of the Baldrige Criteria. So it really doesn’t matter whether you’re profit-driven, focused on maximizing shareholder value, or like us in the public sector … focused on executing our mission effectively and efficiently: those key seven areas are applicable no matter what your organization’s type or sector."
Added Caiazzo, "Baldrige provides a turnkey solution to looking at an organization with a degree of objectivity as to what’s truly important for accomplishing its own mission."
According to Brescia, another benefit for organizations that adopt the Baldrige Criteria is that "it’s a really good framework for building business acumen within your workforce. And business acumen is one of the key characteristics of great leadership. In other words, understanding how the different facets of your organization work together to deliver outstanding results for the customer is really critical to developing your future leadership. … In the public sector or the private sector, it’s a very beneficial way of building that business acumen in the workforce at every level."
What are a few tips for others about using the Baldrige framework to make improvements and achieve excellence across an organization?
"Step number-one in change management is always for leaders to establish a sense of urgency," said Brescia. "Baldrige is a vehicle for establishing and maintaining transformational change in your organization. The responsibility of great leaders is to align the mission, vision, and values within the organization. Paint the vision of what change looks like and how the Baldrige framework gets you there."
"Use the Baldrige Criteria to provide a common language to discuss improvement," Caiazzo said, "so that everyone is using the same vernacular."
"Make sure you focus on results," said Brescia. "In other words, the way to institutionalize the Baldrige framework is to actually use it to manage the business. That comes down to establishing a formal venue for senior leadership to review results and make changes as required. This way, when you do have changes in leadership, with the venue institutionalized, it doesn’t live and die with the leadership that started it."
Join us at the 2015 Baldrige Regional Conferences to attend this session and many more from 2014 and other Baldrige Award recipients.
Sep 10, 2015 09:34am
Posted by Dawn Marie Bailey
Since its Baldrige Award win in 2009, Mosaic Life Care (formerly called Heartland Health) has continued to be nationally recognized for quality, value, and the patient experience. In 2015, Mosaic Life Care was named to the Truven Health Analytics™ 100 Top Hospitals® list, given an "A" rating by The Leapfrog Group, identified as a HealthStrong™ Hospital by iVantage® Health Analytics, and named a "Most Wired" hospital by Hospitals & Health Networks magazine. Based in St. Joseph, Missouri, Mosaic Life Care remains a nonprofit, community-based integrated health system serving the residents of northwest Missouri, northeast Kansas, southeast Nebraska, and southwest Iowa—the region’s largest health system and employer. Since its Baldrige win, it has expanded its geographic reach into Kansas City north.
But it’s the name that is a significant change for the health system, which is making a transition from providing health care to life care. In a virtual interview, I asked Martha Davis, Institute Leader at Mosaic, about the significance of the name change and how Mosaic is transforming health care. She gave me a sneak peek into her presentation for the upcoming Baldrige regional conference in Nashville about the importance of this transformation.
How would you describe the transformation from health care to life care? Why has this been important to your success?
We are an Accountable Care Organization and, as such, have recognized the need to move from a patient-centric to a consumer-centric approach. We’ve traditionally built service offerings around the acute and chronic health care needs of patients, but we haven’t always done a good job of engaging consumers to prevent or slow the effects of lifestyle, stress, aging, and other factors that impact one’s long-term health. Consumers are shouldering higher insurance plan deductibles and expect more cost transparency and better care experiences. We are also seeing great opportunities to partner with employers to help them lower costs through improved employee health support. Our life-care model is holistic and focused on health, wellness, and well-being.
What are your top tips (e.g., 3 to 5 suggested practices) for using Baldrige to support such a transformation?
We have built solid disciplines around planning and deployment of the Baldrige Excellence Framework, which has been invaluable as we’ve expanded into new geographic areas with very different competitive factors. Some of what we’ve learned follows:
While our strategic priorities have remained the same, we’ve established reasonable but stretch measures given the startup of new offerings and services in our expanded geographic service areas. The linkage between measurement and results is critical during times of transformation and innovation—we have to be discerning about what services resonate with our consumers and which ones don’t, regardless of how great an idea we think an offering is! Maintaining process focus is also essential to this work—not only to improve efficiency and lower cost, but also to improve the provider and consumer experience.
In our customer focus, we are moving beyond patient satisfaction surveys to gain new insights into what our customers value. During the September session, we will talk about what we’ve learned about customer sacrifice by studying out-of-industry exemplars and adapting those learnings to our strategies. These new insights are helping us to develop life care offerings that engage and activate consumers in their own health and well-being.
We’ve relied heavily on the People Framework we developed as a result of our workforce focus—especially in our new markets and expanded offerings. It is imperative that we hire and onboard the right people, provide the right learning and development opportunities, and provide the right feedback and recognition—whether it is with our leaders, providers, or workforce members. We constantly battle change fatigue and continually look at ways to positively engage our providers and workforce members.
What are a few key reasons that organizations in your sector can benefit from using the Baldrige Excellence Framework?
Two key reasons: first, there is enormous disruption in our industry—huge pressure to lower costs and improve the patient experience. With reduced reimbursement, organizations can’t waste precious resources on non-value-added work. The Baldrige framework is an excellent way to prepare for this disruption and upheaval. Second, a population health focus requires great change in the workforce composition: acute care services must leverage technology and tightly control labor costs; clinics need to do the same and be proactive at keeping patients healthy and out of higher-cost venues; and we need to offer scalable, virtual, and other forms of services to consumers who are mostly healthy. This requires that we rethink the skill sets we will need in the future and how we will attract, prepare, and retain the best workforce members. We like to think of the framework as a huge accelerating factor!
Join us at the 2015 Baldrige Regional Conferences to attend this session and many more from current and former Baldrige Award recipients.
Sep 10, 2015 09:33am
Posted by Harry Hertz, the Baldrige Cheermudgeon
Some of my friends commented that Harry the "Cheermudgeon" was too "mudgeonly" in my most recent Blogrige post. So, I decided it was time to cheer. Here is a cheer to getting older. The topic came to me this past week, when I was given a senior citizen Thursday discount at the local supermarket without asking for it — despite the fact that I don’t look a day over thirty! (I didn’t know the discount even existed until my wife told me.)
Another great joy of getting older is substituting grand-parenting for parenting. This summer we had the pleasure of giving our kids a break and hosting our granddaughters (in shifts with some overlap) for almost three weeks. At the end we were exhausted, but it was awesome! They are three, six, and ten years old and a real change after raising two sons.
Naturally, we were totally focused on exceeding our customers’ expectations! However, we also had the opportunity to utilize three additional important categories of the Baldrige Excellence Framework: Strategy (category 2), Operations (category 6), and, of course, Results (category 7). A focus on work systems allowed us to consider all that was important, make sure we were prepared for our assignment, and delight our customers! (It also gives me another opportunity to show how work systems can be applied in all "businesses".)
I defined our key work systems as: provision of room and board, entertainment, daily close-out (aka bedtime), and emergency preparedness. We quickly decided that room and board would be an internal work process involving our own staff (my wife and me), entertainment would involve ourselves and external suppliers, daily close-out was also an internal work process, and emergency preparedness would involve us and a key external supplier.
External suppliers for entertainment included several local parks and pools/splash facilities, a large amusement park, a local carousel and puppet theater, a museum, and Wolf Trap National Park for a Disney concert. We contracted much of the entertainment to those best equipped to provide it efficiently and more cost-effectively than if we had decided to develop in-house resources!
With three children in the house, emergency preparedness comprised prevention — providing a safe "work" environment (our work process) — and also preparing for disasters. Disaster preparation involved having a pediatrician on call 24/7, even though our good Baldrige friend Don Lighter fortunately never had to be notified that he was on speed dial!
And now for the results! I have to admit to being in the early stages of reporting results. We have no trends or comparison data, but we have measured what was important. Customer engagement was high, and repeat business is anticipated by our loyal customers. Workforce satisfaction is high, although it dipped during a prolonged tantrum by the three-year-old and on a few daily close-outs. Supplier performance was a consistent 9 or 10. We had one "accident" that resulted in the involvement of an unanticipated supplier, a plumber. We had no emergencies and never needed to call on our medical supplier.
Oh, and our customer engagement may have had one unintended consequence: we might have victimized a key stakeholder, the parents, who had to re-introduce more stringent customer engagement processes!
I hope you had a good summer and that your work systems performed well!
Sep 10, 2015 09:30am
By Christine Schaefer
Did you ever wonder who the folks are who judge applications for the Malcolm Baldrige National Quality Award? What in their backgrounds brought them to this high honor, and what advice might they have for Baldrige Award applicants, potential applicants, and examiners?
For an ongoing series of profiles, we have been interviewing members of the 2015 Judges’ Panel of the Malcolm Baldrige National Quality Award to share individuals’ insights and perspectives on the award process, their experiences, and the Baldrige framework and approach to organizational improvement in general.
The primary role of the Judges’ Panel is to ensure the integrity of the Baldrige Award selection process. Based on a review of the results of examiners’ scoring of written applications (the Independent and Consensus Review processes), judges vote on which applicants merit Site Visit Review (the third and final examination stage) to verify and clarify their excellent performance in all seven categories of the Baldrige Criteria for Performance Excellence. The judges also review reports from site visits to recommend to the U.S. Secretary of Commerce which organizations to name as U.S. role models—Baldrige Award recipients. No judge participates in any discussion of an organization for which he/she has a real or perceived conflict of interest. Judges serve for a period of three years.
Following is the interview of Greg Gibson, Ed.D., a first-year judge and the superintendent of Schertz-Cibolo-Universal City Independent School District in Texas.
Dr. Greg Gibson
What experiences led you to the role of Baldrige judge?
I have had the honor, pleasure, and challenge of serving as an examiner, team leader, coach, and board member for Quality Texas Foundation. Most recently, I am working with local community leaders in creating a partnership in leadership development through the use of the Baldrige framework lens. We intend for this to become a model for our state and possibly the nation.
You have a great deal of experience in the education sector. How do you see the Baldrige Excellence Framework as valuable to educational organizations?
The education sector is being inundated with "improvement initiatives" from state and federal government. The reality is that excellence will only come from intrinsic motivation and never extrinsic motivation. The Baldrige framework has laid out a roadmap for excellence for education organizations, without being overly prescriptive. By deeply deploying the core values and principles of the Baldrige framework, "top-down" initiatives become less and less necessary, and excellence becomes more and more probable.
How do you apply Baldrige principles/concepts to your current work experience/employer?
Many of our senior leaders serve as state examiners. All senior leaders receive annual training in Baldrige core values, especially systems perspective, visionary leadership, and management by fact. We are working to counter any negative perception of government and education by demonstrating that excellence can and does exist in government/public education. I am honored and humbled to serve as leader of an organization and a member of a community that has a persistent disquiet with the status quo.
As a judge, what are your hopes for the judging process? In other words, as a judge what would you like to tell applicants and potential Baldrige Award applicants about the rigor of the process?
Every year, I am in awe of the applicant organizations from across our nation that achieve performance excellence through the Baldrige framework. Sometimes, when I watch the news and get frustrated at the negative, I will flip over to the NIST website and read about Baldrige Award applicants and winners. It restores my faith in our country every time. This process is rigorous, but the importance of the example that you are setting (as an applicant) for the rest of the country cannot be overstated.
What encouragement/advice would you give Baldrige examiners who are evaluating award applicants (preparing for upcoming site visits) now?
The backbone of the Baldrige process is the volunteer examiner. We all owe a debt of gratitude to our examiners. I know this process is stringent and arduous, and I also know that there must be times when you wonder "why am I doing this?" Let me assure you that you are making this country stronger one applicant at a time. I stand in awe of your efforts.
See other blogs on the 2015 Judges’ Panel: Laura Huston, Dr. Ken Davis, Michael Dockery, Miriam N. Kmetzo, Dr. Sharon L. Muret-Wagstaff, Dr. Mike R. Sather, Ken Schiller, Dr. Sunil K. Sinha, Dr. John C. Timmerman, Roger M. Triplett, and Fonda Vera.
Sep 10, 2015 09:29am
By Douglas C. Schmidt, Chief Technology Officer
Happy Memorial Day from all of us here at the SEI. I’d like to take advantage of this special occasion to keep you apprised of some recent technical reports and notes from the SEI, part of an ongoing effort to keep you informed about the latest work of SEI technologists. These reports highlight their latest work in embedded systems, cyber security, appraisal requirements for CMMI Version 1.3, improving the quality and use of data, and software assurance. This post includes a listing of each report, its authors, and links where the published reports can be accessed on the SEI website.
As always, we welcome your feedback on our work.
Trusted Computing in Embedded Systems Workshop
By Archie Andrews & Jonathan McCune
This report describes the November 2010 Trusted Computing in Embedded Systems Workshop held at Carnegie Mellon University. This workshop brought together various groups concerned with advancing research into improving the trustworthiness in embedded systems. The workshop format provided the opportunity to focus on embedded systems while examining the application of related trust technologies in order to foster collaborative approaches and information exchange in this area. Presentations and discussion addressed the capabilities and limitations of effectively employing trusted hardware-enabled components in embedded systems. This included, but was not restricted to, the following areas: new research and development in enabling trust in embedded systems, methods and techniques for establishing trust in embedded systems, lessons learned from research and development projects on embedded systems security, and gaps in current research. The workshop resulted in identification of gaps in current research and recommendations for potential research directions.
PDF Download
Best Practices for National Cyber Security: Building a National Computer Security Incident Management Capability, Version 2.0
By John Haller, Samuel A. Merrell, Matthew J. Butkovic, & Bradford J. Willke
As nations recognize that their critical infrastructures have integrated sophisticated information and communications technologies (ICT) to provide greater efficiency and reliability, they quickly realize the need to effectively manage risk arising from the use of these technologies. Establishing a national computer security incident management capability can be an important step in managing that risk. In this document, this capability is referred to as a National Computer Security Incident Response Team (CSIRT), although the specific organizational form may vary among nations. Nations face various challenges when working to strengthen incident management, such as the lack of information providing guidance for establishing a national capability, determining how this capability can support national cyber security, and managing the national incident management capability. This document, first in the Best Practices for National Cyber Security series, provides information that interested organizations and governments can use to develop a national incident management capability. The document explains the need for national incident management and provides strategic goals, enabling goals, and additional resources pertaining to the establishment of National CSIRTs and organizations like them.
PDF Download
Appraisal Requirements for CMMI Version 1.3 (ARC, V1.3)
By the SCAMPI Upgrade Team
This report, the Appraisal Requirements for CMMI, Version 1.3 (ARC, V1.3), defines the requirements for appraisal methods intended for use with Capability Maturity Model Integration (CMMI) and with the People CMM. The ARC may also be useful when defining appraisals with other reference models. The ARC defines three appraisal classes distinguished by the degree of rigor associated with the application of the method. These classes are intended primarily for people who develop appraisal methods to use with reference models such as those in the CMMI product suite.
PDF Download
Issues and Opportunities for Improving the Quality and Use of Data in the Department of Defense
By Mark Kasunic, David Zubrow, & Erin Harper
The Department of Defense (DoD) is becoming increasingly aware of the importance of data quality to its operations, leading to an interest in methods and techniques that can be used to determine and improve the quality of its data. The Office of the Secretary of Defense for Acquisition, Technology, and Logistics (OSD [AT&L]), Director, Defense Research & Engineering (DDR&E) sponsored a workshop to bring together leading researchers and practitioners to identify opportunities for research focused on data quality, data analysis, and data use. Seventeen papers were accepted for presentation during the workshop. During workshop discussion, participants were asked to identify challenging areas that would address technology gaps and to discuss research ideas that would support future DoD policies and practices. The Software Engineering Institute formed three primary recommendations for areas of further research from the information produced at the workshop. These areas were integrating data from disparate sources, employing provenance analytics, and developing models, methods, and tools that support data quality by design.
PDF Download
Software Assurance Curriculum Project Volume III: Master of Software Assurance Course Syllabi
By Nancy R. Mead, Julia H. Allen, Mark A. Ardis, Thomas B. Hilburn, Andrew J. Kornecki, & Richard C. Linger
This report, the third volume in the Software Assurance Curriculum Project sponsored by the U.S. Department of Homeland Security, provides sample syllabi for the nine core courses in the Master of Software Assurance Reference Curriculum.
PDF Download
Additional Resources:
For the latest SEI technical reports and papers, visit www.sei.cmu.edu/library/reportspapers.cfm.
Sep 10, 2015 09:28am
By Douglas C. Schmidt, Chief Technology Officer
Happy Independence Day from all of us here at the SEI. I’d like to take advantage of this special occasion to keep you apprised of a new technical report from the SEI. It’s part of an ongoing effort to keep you informed about the latest work of SEI technologists. This report highlights their latest work in the field of insider threat. This post includes the report’s title, its authors, and a link where the published report can be accessed on the SEI website.
As always, we welcome your feedback on our work.
A Preliminary Model of Insider Theft of Intellectual Property
By Andrew P. Moore, Dawn M. Cappelli, Thomas C. Caron, Eric Shaw, Derrick Spooner, & Randall F. Trzeciak
PDF Download
An Excerpt
Since 2002, the CERT® Program at Carnegie Mellon University’s Software Engineering Institute has been gathering and analyzing actual malicious insider incidents, including information technology (IT) sabotage, fraud, theft of confidential or proprietary information, espionage, and potential threats to the critical infrastructure of the United States. Consequences of malicious insider incidents include financial losses, operational impacts, damage to reputation, and harm to individuals. The actions of a single insider have caused damage to organizations ranging from a few lost staff hours to negative publicity and financial damage so extensive that businesses have been forced to lay off employees and even close operations. Furthermore, insider incidents can have repercussions beyond the affected organization, disrupting operations or services critical to a specific sector, or creating serious risks to public safety and national security.
CERT insider threat work, referred to as MERIT (Management and Education of the Risk of Insider Threat), uses the wealth of empirical data collected by CERT to provide an overview of the complexity of insider events for organizations—especially the unintended consequences of policies, practices, technology, efforts to manage insider risk, and organizational culture over time. As part of MERIT, we have been using system dynamics modeling and simulation to better understand and communicate the threat to an organization’s IT systems posed by malicious current or former employees or contractors. Our work began with a collaborative group modeling workshop on insider threat hosted by CERT and facilitated by members of what has evolved into the Security Dynamics Network and the Security Special Interest Group of the System Dynamics Society.
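As a rough illustration of the system dynamics approach mentioned above (this is a hypothetical toy loop, not CERT's actual MERIT model), a single "stock" influenced by an inflow and an outflow can be simulated with simple Euler integration:

```python
# A minimal system dynamics sketch (hypothetical, not CERT's MERIT model):
# one stock (insider disgruntlement) driven by an inflow (new grievances)
# and an outflow (management intervention), integrated with Euler's method.

def simulate(steps=12, dt=1.0, grievance_rate=2.0, intervention_rate=0.3):
    """Return the disgruntlement level at each time step."""
    disgruntlement = 0.0
    history = []
    for _ in range(steps):
        inflow = grievance_rate                       # grievances per period
        outflow = intervention_rate * disgruntlement  # intervention drains the stock
        disgruntlement += (inflow - outflow) * dt     # Euler integration step
        history.append(disgruntlement)
    return history

levels = simulate()
# The stock rises toward the equilibrium grievance_rate / intervention_rate,
# illustrating how a balancing feedback loop limits growth over time.
```

Real system dynamics models of insider threat involve many interacting stocks, delays, and policy levers; the value of even a toy model like this is making the feedback structure explicit.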
Based on our initial modeling work and our analysis of cases, we have found that different classes of insider crimes exhibit different patterns of problematic behavior and mitigating measures. CERT has found four categories of insider threat cases based on the patterns we have seen in cases identified: IT sabotage, fraud, theft of intellectual property (IP), and national security espionage. We believe that modeling these types of crimes separately can be more illuminating than modeling the insider threat problem as a whole. In this paper, we focus on theft of IP.
We define insider theft of IP as crimes in which current or former employees, contractors, or business partners intentionally exceeded or misused an authorized level of access to networks, systems, or data to steal confidential or proprietary information from the organization. This paper is centered on two dominant models found within the cases: the Entitled Independent Scenario (27 cases) and the Ambitious Leader Scenario (21 cases). We first define our approach to building these models. Next, we incrementally build the models, describing them as we go. Finally, we provide general observations and discuss future work. Appendix A summarizes important characteristics of the crimes involving theft of IP. Appendices B and C provide an overview of the models developed. We believe that these models will help people better understand the complex nature of this class of threat. Through improved understanding comes better awareness and intuition regarding the effectiveness of countermeasures against the crime. Our work generates strong hypotheses based on empirical evidence. Future work will involve alignment with existing theory, testing of these hypotheses based on random sampling from larger populations, and analysis of mitigation approaches.
To read the complete report, please visit www.sei.cmu.edu/library/abstracts/reports/11tn013.cfm
Additional Resources:
For more information about the CERT program, please visit www.cert.org
To read the Insider Threat blog, please visit www.cert.org/blogs/insider_threat/
SEI Blog | Sep 10, 2015 09:28am
By Nanette Brown, Senior Member of the Technical Staff, Research, Technology, and System Solutions Program
Occasionally this blog will highlight different posts from the SEI blogosphere. Today’s post is from the SATURN Network blog by Nanette Brown, a senior member of the technical staff in the SEI’s Research, Technology, and System Solutions program. This post, the third in a series on lean principles and architecture, continues the discussion of the eight types of waste identified in Lean manufacturing and how these types of waste manifest themselves in software development. The focus of this post is on mapping the waste of motion and the waste of transportation from manufacturing to the waste of information transformation in software development.
To read more...
SEI Blog | Sep 10, 2015 09:27am
By Paul Clements, Senior Member of the Technical Staff, Research, Technology, & System Solutions
Testing plays a critical role in the development of software-reliant systems. Even with the most diligent efforts of requirements engineers, designers, and programmers, faults inevitably occur. These faults are most commonly discovered and removed by testing the system and comparing what it does to what it is supposed to do. This blog posting summarizes a method that improves testing outcomes (including efficacy and cost) in a software-reliant system by using an architectural design approach, which describes a coherent set of architectural decisions taken by architects to help meet the behavioral and quality attribute requirements of systems being developed.
Developers of software-reliant systems must address several testing-related challenges. For example, testing is expensive and can account for more than 50 percent of a project’s schedule and budget, depending on the criticality of the system. Unfortunately, some organizations assign a budget for testing and stop when that budget is consumed. Safety-critical software organizations also have a budget, but they often must reach a confidence level independent of the expenditures to meet certification standards. In either situation, improving the efficacy and cost of testing is essential to meeting requirements and business goals.
Another challenge is that few organizations inform the testing process by considering the software architecture, which comprises the structure of the software elements in a system, the externally visible properties of those elements, and the relationships among them. Ignoring or overlooking the software architecture during the testing process is problematic because the structures that comprise the software architecture ensure the quality attributes and enable the system to meet its requirements and business goals.
To address the challenges outlined above, we have developed an architectural design approach to testing software-reliant systems. The foundation of this approach involves creating testability profiles that give testers an actionable description of a design approach’s effect on the testing practice. Each testability profile consists of four parts:
In the first part, testers conduct an initial analysis to determine that the architecture design approach (often expressed in the form of architecture styles and patterns) is actually used in the product or artifact that is being tested. This part of the testability profile defines the essential characteristics of the architecture design approach and describes how to recognize those characteristics in an artifact. Ideally, this step is accomplished by referring to specific views in the architecture documentation. Realistically, verifying the presence of the architecture design approach may require correlating information from various parts of the architecture documentation. Techniques such as design structure matrices (DSM) and architecture-level call graphs can be used to identify structural patterns in either an architecture description or implementation. Some DSM tools will parse source code, producing a matrix of dependencies that actually exist in the code.
The second part of a testability profile includes a fault model that consists of the following two subparts:
The first subpart describes the system or component and characterizes possible failures associated with the chosen architectural approach. It is possible to associate a fault model with a particular architecture design approach. For example, in a pipe and filter architecture, the pipes cannot change the order or values of the data in their data streams or communicate with other pipes. If they do, this is considered a fault, and, more importantly, it is considered a fault that is associated with a particular architectural style or pattern.
The second subpart enumerates the set of possible failures that this particular architecture design approach relieves the system from. For example, if the architect has selected a state machine design approach to encapsulate the control logic of a module, a subsystem, or the entire system, then (assuming an implementation that is demonstrably compliant to the architecture) no control logic errors are possible outside the state machine’s encapsulating component.
A third part examines the available analysis that corresponds to the fault model to determine if a particular analysis can be performed based on the architecture design approach and whether that analysis can tell conclusively whether a particular fault exists in the system. This part of the profile details any available tools and methods, such as the Architecture Analysis and Design Language (AADL) analytical toolset or model checkers, that can be used to draw conclusions about systems that are compliant with that architecture. The analysis may be architecture- or code-based.
The final part includes tests that have been made redundant or that can be de-prioritized as a result of the fault model and the analysis. If the first three parts are completed for an architecture that includes the architecture design approach, then certain tests for the corresponding faults become unnecessary or can be de-emphasized in the testing procedures. For example, if analysis can show that deadlock is impossible in an architecture-compliant implementation, then it should be unnecessary to test for deadlock.
Testability profiles do not currently exist for architectural design approaches. Although pattern catalogs, such as the Pattern-Oriented Software Architecture (POSA) series and the "Gang of Four" book, are now common, there was a time when pattern descriptions were not widely available. If the cost and/or efficacy of testing can be substantially improved through the use of testability profiles, we might one day expect to see them documented alongside (or as part of) the pattern description of an approach.
To see how the testability profile and the architecture design approach fit together, suppose an engineer decided to build a service-oriented architecture (SOA) for a system. While that choice may confer many desirable quality attributes on the system, the system may now be susceptible to a class of faults specific to SOA. For example, the network may not deliver a service request the way that it should, or a particular service may not provide the quality attributes that are needed. After an architecture design approach testability profile is established, testers can decide whether to perform one or more of the following steps:
Check the implementation of an architecture design approach’s observables to verify, or at least gain confidence, that an architecture design approach is present.
Invoke the profile’s architecture-based analysis to determine whether the system contains architecture design approach-related faults.
Remove or de-emphasize from the test portfolio some or all of the test cases associated with faults ruled out by the architecture design approach’s fault model.
Remove or de-emphasize from the test portfolio some or all of the test cases associated with faults ruled out by analysis.
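The profile structure and the pruning steps above can be sketched as a minimal data structure (all names here are hypothetical illustrations, not part of the SEI method's notation):

```python
# A minimal sketch of a testability profile: the fault model records faults an
# architecture design approach introduces and faults it rules out, and pruning
# drops test cases whose target fault is ruled out by the model or by analysis.

from dataclasses import dataclass, field

@dataclass
class TestabilityProfile:
    approach: str                                    # e.g., "pipe-and-filter"
    faults_introduced: set = field(default_factory=set)
    faults_ruled_out: set = field(default_factory=set)

def prune_tests(test_portfolio, profile, analysis_ruled_out=frozenset()):
    """Keep only tests whose target fault is still possible."""
    ruled_out = profile.faults_ruled_out | set(analysis_ruled_out)
    return {name: fault for name, fault in test_portfolio.items()
            if fault not in ruled_out}

profile = TestabilityProfile(
    approach="state-machine control logic",
    faults_introduced={"missing transition"},
    faults_ruled_out={"control logic error outside state machine"},
)
portfolio = {
    "test_external_control": "control logic error outside state machine",
    "test_transitions": "missing transition",
    "test_deadlock": "deadlock",
}
remaining = prune_tests(portfolio, profile, analysis_ruled_out={"deadlock"})
# Only "test_transitions" survives: the other two target faults that the
# fault model or the architecture-based analysis has ruled out.
```

The design point this sketch makes concrete is that the fault model and the analysis results prune the portfolio independently, matching the last two steps in the list above.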
As a result of our research, testers will be able to determine the most important things to test for by illuminating new failure models that might not have been known before. Conversely, testers will also be able to determine failure models that they can safely assume will not occur. It is our hypothesis that this approach is broadly applicable to many types of systems. We are interested in working with organizations to pilot this approach, so if you would like us to consider your organization for our pilot program, please send an email to info@sei.cmu.edu.
Additional Resources: For more information about our research in architecture support for testing, please visit www.sei.cmu.edu/architecture/research/archpractices/Architecture-Support-for-Testing.cfm
SEI Blog | Sep 10, 2015 09:27am
By David White, Smart Grid Maturity Model Project Manager
A reliable, secure energy supply is vital to our economy, our security, and our well-being. A key component of achieving a reliable and secure energy supply is the "smart grid" initiative. This initiative is a modernization effort that employs distributed sensing and control technologies, advanced communication systems, and digital automation to enable the electric power grid to respond intelligently to fluctuations in energy supply and demand, the actions of consumers, and market forces, with an overall objective to improve grid efficiency and reliability. A smart grid will also allow homeowners to track energy consumption and adjust their habits accordingly. This posting describes several initiatives that the SEI has taken to support power utility companies in their modernization efforts to create a smart grid.
As power utility companies consider modernizing their existing grids to create smart grids, they must develop effective roadmaps and track progress against these roadmaps. The Smart Grid Maturity Model (SGMM) is a framework that helps utilities plan smart grid implementation, prioritize options, and measure progress. With support from the Department of Energy’s Office of Electricity Delivery and Energy Reliability—along with input from a broad array of stakeholders—the SEI has helped formulate and host the SGMM as a resource for industry transformation. For example, the SEI trains industry experts to serve as SGMM Navigators, who work directly with electric utilities in support of their grid modernization efforts.
One of the primary roles of an SGMM Navigator is to lead a utility through the SGMM Compass Survey assessment tool, which collects performance data and evaluates characteristics of the utility’s smart grid progress. Compass results allow the utility to compare its progress with other utilities that have completed the survey. The results also provide the utility with a measure of its progress across the following eight domains of the model, which describe logical groupings of smart grid-related capabilities and characteristics:
Strategy, Management, and Regulatory (SMR) describes characteristics that enable the organization to align and operate to achieve its desired smart grid transformation.
Organization and Structure (OS) focuses on internal changes that are needed in culture, structure, training, communications, and knowledge management to achieve smart grid implementation.
Grid Operations (GO) describes characteristics that support the reliable, efficient, secure, and safe operation of the electrical grid. Many characteristics in this domain express the transition from manually intensive operation of the grid to more automated operation.
Work and Asset Management (WAM) describes characteristics that optimize the management of grid assets and workforce resources. It’s about people and equipment that are central to meeting the smart grid goals. It includes characteristics about asset monitoring, tracking, and maintenance and issues related to supporting the mobile workforce.
Technology (TECH) describes the information technology (IT) architecture that supports smart grid implementation including the adoption and implementation of standards, infrastructure, and the integration of various technology tools across the utility to support smart grid transformation.
Customer (CUST) describes the characteristics that enable the customer’s participation toward achieving the benefits of the smart grid transformation with the utility. The CUST domain addresses issues associated with pricing, customer participation (both passive and active) and the customer’s experience through that participation. It also addresses issues associated with advanced services that the utility might make available using smart grid functionality to serve customers better.
Value Chain Integration (VCI) describes the characteristics that allow the utility to successfully manage the interdependencies with the supply chain for the production of electricity and the demand side for the delivery of electricity. Many smart grid features deal with supply and demand management. This domain also covers issues associated with leveraging market opportunities through smart grid automation.
Societal and Environmental (SE) enables the utility to contribute to achieving societal goals regarding the reliability, safety, and security of our electric power infrastructure. This domain addresses both the quantity and sources of energy used, and the impact of the infrastructure and our energy use on the environment and quality of life. These issues are a major focus of many utilities that are beginning smart grid initiatives.
To complete the assessment, a SGMM Navigator leads a workshop with the utility’s operations team. After the survey is complete, the Navigator facilitates a second workshop to review the findings with the utility and use the SGMM to set strategic goals or aspirations for smart grid implementation. The Navigators have access to detailed process scripts, checklists, and templates created by the SGMM team to facilitate the Navigation process. The SEI uses data collected through the Compass survey to guide future improvements to the model and report on the status of the grid modernization reflected by the community of SGMM users.
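As an illustration of how per-domain survey results might be rolled up into a maturity profile (the domain abbreviations come from the model above, but the scoring rule here is purely hypothetical, not the actual Compass algorithm):

```python
# Hypothetical sketch of aggregating Compass-style survey answers into a
# per-domain maturity profile. The eight domain abbreviations are from the
# SGMM; the scoring rule (minimum reported characteristic level) is only an
# illustrative stand-in for the real Compass scoring.

SGMM_DOMAINS = ["SMR", "OS", "GO", "WAM", "TECH", "CUST", "VCI", "SE"]

def maturity_profile(responses):
    """responses maps a domain to a list of per-characteristic levels (0-5).

    A domain's level is approximated here as the highest level at which all
    of its characteristics are met, i.e., the minimum reported level."""
    return {d: (min(responses[d]) if responses.get(d) else 0)
            for d in SGMM_DOMAINS}

example = {
    "SMR": [3, 2, 3], "OS": [1, 2], "GO": [2, 2, 2], "WAM": [1],
    "TECH": [3, 3], "CUST": [0, 1], "VCI": [1, 1], "SE": [2],
}
profile = maturity_profile(example)
# e.g., profile["SMR"] == 2 (one characteristic lags at level 2) and
# profile["CUST"] == 0 (a characteristic has not been started).
```

A min-based rollup like this reflects the "weakest link" reading of maturity levels; benchmarking against other utilities, as Compass does, then amounts to comparing such profiles across the survey population.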
As of August 2011, more than 120 utilities around the world have used the SGMM as a management tool to help modernize the electric power grid and enable important advances in energy efficiency, reliability, and security. Some utilities have applied the model to regional and national roadmapping. In the summer of 2010, the SGMM team worked directly with the Comision Federal de Electricidad (CFE), one of the largest utilities in the world with 33.9 million customers, and the Mexican Energy Ministry, Secretaria de Energia de Mexico (SENER), on the first use of the SGMM at the national level to assist in developing a national smart grid roadmap.
The SGMM Navigator program is the latest tool from the SEI’s Smart Grid Team, which has served as the steward of the model since 2009. In October 2010, the SEI published Version 1.1 of the Smart Grid Maturity Model, after piloting the model revisions with more than 30 small and large utilities. Changes in SGMM V1.1 include enhanced security coverage, a more refined architecture, and a more developed model, which now includes 175 characteristics. The model is part of a full product suite, which includes training, the Navigation process, and Compass, to support utilities in using SGMM to develop a roadmap for their smart grid transformation.
The SEI finalized SGMM version 1.2 (released on September 12, 2011), which includes revisions and additions to the organizational attributes and performance information collected through the Compass survey. The new data collected will support future research efforts on the effectiveness of smart grid implementation by providing a basis for performing correlation studies and other statistical tests on performance measurements and SGMM maturity profiles. This research may reveal patterns in SGMM maturity profiles that correlate positively to performance improvements. These correlations could suggest effective smart grid implementation patterns (as measured by SGMM) or may simply suggest areas for additional study.
Increasing the use of the Compass survey will support SEI research efforts by providing a larger data set for study. To support expanded use of the model by utilities, the SEI has created an opportunity for outside organizations to become SEI Partners and deliver SGMM Navigation services. To date, seven organizations have become SEI Partners for SGMM: Ebiz Labs, Horizon Energy Group, Infotech Enterprises America, IBM, SAIC’s RW Beck, TCS America, and Wipro. More than 30 industry experts from these organizations have been trained in leading the SGMM Navigation process. This pool of SGMM Navigators will help utilities succeed with their smart grid transformations and will provide a steady stream of data back to the SEI to support future research and analysis efforts.
Additional Resources:
For more information about the Smart Grid Maturity Model, please visit www.sei.cmu.edu/smartgrid/
To view a webinar on the Smart Grid Maturity Model, please visit www.sei.cmu.edu/library/abstracts/webinars/Empower-Your-Smart-Grid-Transformation.cfm
Most documents in the Smart Grid Maturity Model are available for download. For more information, please visit www.sei.cmu.edu/smartgrid/start/downloads/
Three podcasts on the Smart Grid Maturity Model can be viewed at:
www.cert.org/podcast/show/20110505white.html
www.cert.org/podcast/show/20100112jones.html
www.cert.org/podcast/show/20090929stevens.html
A list of current SGMM Partners is available at www.sei.cmu.edu/partners/sgmm/
Information for individuals interested in becoming SGMM Navigators: www.sei.cmu.edu/certification/sgmm/navigator/
Information for organizations interested in becoming Partners for SGMM: www.sei.cmu.edu/partners/become/sgmm/
SEI Blog | Sep 10, 2015 09:25am