By Douglas C. Schmidt, Principal Researcher
As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in malware analysis, acquisition strategies, network situational awareness, resilience management (with three reports from this research area), incident management, and future architectures. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.
Using Malware Analysis to Tailor SQUARE for Mobile Platforms
By Gregory Paul Alice and Nancy R. Mead
As the number of mobile-device software applications has grown, so has the amount of malware targeting them. More than 650,000 pieces of malware now target the Android platform. As mobile malware becomes more sophisticated and begins to approach threat levels seen on PC platforms, software development security practices for mobile applications will need to adopt the security practices for PC applications to reduce consumers’ exposure to financial and privacy breaches on mobile platforms. This technical note explores the development of security requirements for the K-9 Mail application, an open source email client for the Android operating system. The project’s case study (1) used the Security Quality Requirements Engineering (SQUARE) methodology to develop K-9 Mail’s security requirements and (2) used malware analysis to identify new security requirements in a proposed extension to the SQUARE process. This second task analyzed the impacts of DroidCleaner, a piece of Android malware, on the security goals of the K-9 Mail application. Based on the findings, new requirements were created to ensure that similar malware cannot compromise the privacy and confidentiality of email contents.
Download a PDF of the report.
A Method for Aligning Acquisition Strategies and Software Architectures
By Lisa Brownsword, Cecilia Albert, David J. Carney, and Patrick R. Place
In the acquisition of a software-intensive system, the relationship between the software architecture and the acquisition strategy is typically not carefully examined. To remedy this lack, a research team at the SEI has undertaken a multiyear effort to discover an initial set of failure patterns that result when these entities become misaligned and to identify a set of desired relationships among the business and mission goals, system and software architectures, and the acquisition strategy. This report describes the result of the third year of the SEI’s research, in which the team defined a method that indicates such areas of misalignment (i.e., between a program’s architecture and acquisition strategy). The alignment method is used as early in a program’s lifetime as practical, ideally before the architecture or acquisition strategy has attained full definition. The authors illustrate the method by means of a case study, during which many of the key elements of the method were piloted.
Download a PDF of the report.
Smart Collection and Storage Method for Network Traffic Data
By Angela Horneman and Nathan Dell
Captured network data enables an organization to perform routine tasks such as network situational awareness and incident response to security alerts. The process of capturing, storing, and evaluating network traffic as part of monitoring is an increasingly complex and critical problem. With high-speed networks and ever-increasing network traffic volumes, full-packet traffic capture solutions can require petabytes of storage for a single day. The capacity needed to store full-packet captures for a time frame that permits the needed analysis is unattainable for many organizations. A tiered network storage solution, which stores only the most critical or effective types of traffic in full-packet captures and the rest as summary data, can help organizations mitigate the storage issues while providing the detailed information they need. This report discusses considerations and decisions to be made when designing a tiered network data storage solution. It includes a method, based on a cost-effectiveness model, that can help organizations decide what types of network traffic to store at each storage tier. The report also uses real-world network measurements to show how storage requirements change based on what traffic is stored in which storage tier.
Download a PDF of the report.
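As a rough illustration of the kind of tier-assignment decision the report describes, consider the following Python sketch. The traffic categories, volumes, value scores, and costs below are all invented for illustration; the report's cost-effectiveness model is more detailed.

```python
# Hypothetical traffic categories: (name, GB/day at full capture, analytic value
# of full packets relative to summaries). All numbers are invented.
TRAFFIC = [
    ("dns",       500, 0.9),
    ("http",   40_000, 0.7),
    ("video", 300_000, 0.1),
]
FULL_COST_PER_GB = 0.05   # assumed daily storage cost per GB, full-packet tier
SUMMARY_RATIO = 0.01      # assume summaries are ~1% the size of full captures

def assign_tiers(budget_per_day: float) -> dict[str, str]:
    """Greedy cost-effectiveness: keep full packets for the traffic with the
    best value per dollar until the daily budget runs out; summarize the rest."""
    ranked = sorted(TRAFFIC, key=lambda t: t[2] / (t[1] * FULL_COST_PER_GB),
                    reverse=True)
    tiers, spent = {}, 0.0
    for name, gb, _value in ranked:
        full_cost = gb * FULL_COST_PER_GB
        if spent + full_cost <= budget_per_day:
            tiers[name] = "full-packet"
            spent += full_cost
        else:
            tiers[name] = "summary"
            spent += gb * SUMMARY_RATIO * FULL_COST_PER_GB
    return tiers

print(assign_tiers(budget_per_day=3_000))
# {'dns': 'full-packet', 'http': 'full-packet', 'video': 'summary'}
```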
CERT Resilience Management Model—Mail-Specific Process Areas: Mail Induction (Version 1.0)
By Julia H. Allen, Greg Crabb (U.S. Postal Inspection Service), Pamela D. Curtis, Nader Mehravari, and David W. White
Developing and implementing measurable methodologies for improving the security and resilience of a national postal sector directly contribute to protecting public and postal personnel, assets, and revenues. Such methodologies also contribute to the security and resilience of the mode of transport used to carry mail and the protection of the global mail supply chain. Since 2011, the U.S. Postal Inspection Service (USPIS) has collaborated with the SEI’s CERT Division to improve the resilience of selected U.S. Postal Service (USPS) products and services. The CERT Resilience Management Model (CERT-RMM) and its companion diagnostic methods served as the foundational tool for this collaboration.
This report includes one result of the USPIS/CERT collaboration. It is an extension of CERT-RMM to include a new mail-specific process area for the induction (acceptance) of mail into the U.S. domestic mail stream. The purpose is to ensure that mail is collected and accepted in accordance with USPS standards and requirements for the resilience of mail during the induction process.
Download a PDF of the report.
CERT Resilience Management Model—Mail-Specific Process Areas: Mail Revenue Assurance (Version 1.0)
By Julia H. Allen, Greg Crabb (U.S. Postal Inspection Service), Pamela D. Curtis, Nader Mehravari, and David W. White
Developing and implementing measurable methodologies for improving the security and resilience of a national postal sector directly contribute to protecting public and postal personnel, assets, and revenues. Such methodologies also contribute to the security and resilience of the mode of transport used to carry mail and the protection of the global mail supply chain. Since 2011, the U.S. Postal Inspection Service (USPIS) has collaborated with the SEI’s CERT Division to improve the resilience of selected U.S. Postal Service (USPS) products and services. The CERT Resilience Management Model (CERT-RMM) and its companion diagnostic methods served as the foundational tool for this collaboration.
This report includes one result of the USPIS/CERT collaboration. It is an extension of CERT-RMM to include a new mail-specific process area for revenue assurance. The purpose is to ensure that the USPS is compensated for all mail that is accepted, transported, and delivered.
Download a PDF of the report.
CERT Resilience Management Model—Mail-Specific Process Areas: International Mail Transportation (Version 1.0)
By Julia H. Allen, Greg Crabb (U.S. Postal Inspection Service), Pamela D. Curtis, Sam Lin, Nader Mehravari, and Dawn Wilkes
Developing and implementing measurable methodologies for improving the security and resilience of a national postal sector directly contribute to protecting public and postal personnel, assets, and revenues. Such methodologies also contribute to the security and resilience of the mode of transport used to carry mail and the protection of the global mail supply chain. Since 2011, the U.S. Postal Inspection Service (USPIS) has collaborated with the SEI’s CERT Division to improve the resilience of selected U.S. Postal Service (USPS) products and services. The CERT Resilience Management Model (CERT-RMM) and its companion diagnostic methods served as the foundational tool for this collaboration.
This report includes one result of the USPIS/CERT collaboration. It is an extension of CERT-RMM to include a new mail-specific process area for the transportation of international mail. The purpose is to ensure that all international mail is transported in accordance with the standards established by the Universal Postal Union (UPU), which is the governing body that regulates the transportation of international mail.
Download a PDF of the report.
A Systematic Approach for Assessing Workforce Readiness
By Christopher J. Alberts and David McIntire
Workforce effectiveness relies on two critical characteristics: competence and readiness. Competence is the sufficient mastery of the knowledge, skills, and abilities needed to perform a given task. It reflects how well an individual understands subject matter or is able to apply a given skill. Readiness is the ability to apply the total set of competencies required to perform a job task in a real-world environment with acceptable proficiency. A readiness test assesses an individual’s ability to apply a group of technical and core competencies needed to perform and excel at a job task. This report describes research into workforce readiness conducted by the Computer Security Incident Response Team (CSIRT) Development and Training team in the SEI’s CERT Division. It presents the Competency Lifecycle Roadmap (CLR), a conceptual framework for establishing and maintaining workforce readiness within an organization. It also describes the readiness test development method, which defines a structured, systematic approach for constructing and piloting readiness tests. Finally, the report illustrates the initial application of the readiness test development method to the role of forensic analyst.
Download a PDF of the report.
Patterns and Practices for Future Architectures
By Eric Werner, Scott McMillan, and Jonathan Chu
Graph algorithms are widely used in Department of Defense (DoD) applications including intelligence analysis, autonomous systems, cyber intelligence and security, and logistics optimization. These analytics must execute at larger scales and higher rates to accommodate the growing velocity, volume, and variety of data sources. The implementations of these algorithms that achieve the highest levels of performance are complex and intimately tied to the underlying architecture. New and emerging computing architectures require new and different implementations of these well-known graph algorithms, yet it is increasingly expensive and difficult for developers to implement algorithms that fully leverage their capabilities. This project investigates approaches that will make high-performance graph analytics on new and emerging architectures more accessible to users. The project is researching the best practices, patterns, and abstractions that will enable the development of a software graph library that separates the concerns of expressing graph algorithms from the details of the underlying computing architectures. The approach started with a fundamental graph analytics function: the breadth-first search (BFS). This technical note compares different BFS algorithms for central and graphics processing units, examining the abstractions used and comparing the complexity of the implementations against the performance achieved.
Download a PDF of the report.
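For readers unfamiliar with the algorithm at the center of the report, here is a minimal queue-based BFS in Python. It is a reference formulation only; the report compares far more sophisticated CPU and GPU implementations of the same computation.

```python
from collections import deque

def bfs_levels(adj: dict[int, list[int]], source: int) -> dict[int, int]:
    """Classic queue-based BFS: returns each reachable vertex's level (hop count)."""
    levels = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            if w not in levels:          # first visit fixes the vertex's level
                levels[w] = levels[v] + 1
                frontier.append(w)
    return levels

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```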
Additional Resources
For the latest SEI technical reports and notes, please visit http://resources.sei.cmu.edu/library/.
Sep 10, 2015 09:03am
By Douglas C. Schmidt, Principal Researcher
As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in software assurance, social networking tools, insider threat, and the Security Engineering Risk Analysis Framework (SERA). This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.
Predicting Software Assurance Using Quality and Reliability Measures
By Carol Woody, Robert J. Ellison, and William Nichols
Security vulnerabilities are defects that enable an external party to compromise a system. Our research indicates that improving software quality by reducing the number of errors also reduces the number of vulnerabilities and hence improves software security. Some portion of security vulnerabilities (maybe over half of them) are also quality defects. This report includes security analysis based on data the SEI has collected over many years for 100 software development projects. Can quality defect models that predict quality results be applied to security to predict security results? Simple defect models focus on an enumeration of development errors after they have occurred and do not relate directly to operational security vulnerabilities, except when the cause is quality related. This report discusses how a combination of software development and quality techniques can improve software security.
Download a PDF of the report.
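The report's core argument lends itself to back-of-the-envelope arithmetic. In the sketch below, every number is an assumption chosen for illustration, not data from the report:

```python
# Illustrative arithmetic only; all numbers are assumptions, not the report's data.
vulns = 40                 # hypothetical vulnerabilities found in a release
quality_share = 0.5        # "maybe over half" of vulnerabilities are quality defects
defect_reduction = 0.6     # assumed effectiveness of an improved quality process

avoidable = vulns * quality_share * defect_reduction
print(f"an improved quality process might have prevented ~{avoidable:.0f} "
      f"of the {vulns} vulnerabilities")   # ~12 of 40 in this hypothetical
```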
Regional Use of Social Networking Tools
By Kate Meeuf
Social networking services (SNSs) empower users to communicate, connect, and engage with others across the Internet. These tools have exploded in use worldwide. This paper explores the regional use of these tools to determine if participation with a subset of SNSs can be applied to identify a user’s country of origin. A better understanding of regional SNS behavior provides a more comprehensive profile of country-specific users, supporting computer network defense (CND) efforts and computer network attack (CNA) attribution. The conclusions are as follows:
Existing open source reporting yields an understanding of the market penetration of social networking tools for various regions and countries.
Preferences for social networking tools have become somewhat universal. Irrespective of location, users are gravitating towards the same tools.
The native social networking services of countries in Northern Asia and Eastern Europe have remained relevant. These tools can be leveraged as discriminators to resolve a user’s location.
Reporting provided evidence to suggest that mobile devices influence SNS selection and promote social networking adoption.
Cultural factors provide insights into the regional usage of social networking tools, but additional research and quantitative analysis are required to add fidelity to the employment of cultural indicators in deriving a user’s country of origin.
Which social networking tools are used is only part of the equation when resolving a user’s location. Other variables should be incorporated to create an informed assessment of the social media output’s geographic origin.
Download a PDF of the report.
Pattern-Based Design of Insider Threat Programs
By Andrew P. Moore, Matthew L. Collins, Dave Mundie, Robin Ruefle, and David McIntire
Despite the high impact of insider attacks, organizations struggle to implement effective insider threat programs. In addition to the mandate for all Department of Defense (DoD) and U.S. Government agencies to build such programs, approval of updates to the National Industrial Security Program Operating Manual regarding insider threat defense requires thousands of contractors to have insider threat programs as part of their security defense. Unfortunately, according to the Insider Threat Task Force of the Intelligence and National Security Alliance (INSA) Cyber Council, many such organizations have no insider threat program in place, and most of the organizations that do have programs have serious deficiencies in them. This report describes a pattern-based approach to designing insider threat programs that could, if further developed, provide a more systematic, targeted way of improving insider threat defense.
Download a PDF of the report.
Introduction to the Security Engineering Risk Analysis (SERA) Framework
By Christopher J. Alberts, Carol Woody, and Audrey J. Dorofee
Software is a growing component of modern business- and mission-critical systems. As organizations become more dependent on software, security-related risks to their organizational missions are also increasing. Traditional security-engineering approaches rely on addressing security risks during the operation and maintenance of software-reliant systems. However, the costs required to control security risks increase significantly when organizations wait until systems are deployed to address those risks. It is more cost effective to address software security risks as early in the lifecycle as possible. As a result, researchers from the CERT Division of the Software Engineering Institute (SEI) have started investigating early lifecycle security risk analysis (i.e., during requirements, architecture, and design). This report introduces the Security Engineering Risk Analysis (SERA) Framework, a model-based approach for analyzing complex security risks in software-reliant systems and systems of systems early in the lifecycle. The framework integrates system and software engineering with operational security by requiring engineers to analyze operational security risks as software-reliant systems are acquired and developed. Initial research activities have focused on specifying security requirements for these systems. This report describes the SERA Framework and provides examples of pilot results.
Download a PDF of the report.
Additional Resources
For the latest SEI technical reports and notes, please visit http://resources.sei.cmu.edu/library/.
Sep 10, 2015 08:59am
By Douglas C. Schmidt, Principal Researcher
As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in resilience, metrics, sustainment, and software assurance. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.
A Proven Method for Meeting Export Control Objectives in Postal and Shipping Sectors
By Greg Crabb, Julia H. Allen, Pamela D. Curtis, and Nader Mehravari
On a weekly basis, the U.S. Postal Service (USPS) processes over one million packages destined for overseas locations. All international shipments being sent from the United States are subject to federal export laws. The USPS has extensive export compliance policies and screening procedures to ensure that customers comply with federal export laws. Compliance policies and screening procedures are expensive and time consuming and can negatively affect the efficiency of international mail delivery services. The U.S. Postal Inspection Service (USPIS) has defined, developed, and successfully implemented an innovative approach for export screening that has drastically improved its efficiency, effectiveness, and accuracy. Having benefited from using concepts of operational resilience management to improve the security and resilience of USPS products and services, the USPIS team conducted its new export screening project using a structured and repeatable approach based on the CERT Resilience Management Model (CERT-RMM) developed by the SEI.
This report describes how CERT-RMM enabled the USPIS to implement an innovative approach for achieving complex international mail export control objectives. The authors also discuss how this USPIS application of CERT-RMM might be equally applicable to other shipping and transportation sectors that are tasked with meeting export control objectives.
Download a PDF of the report.
Measuring What Matters Workshop Report
By Katie C. Stewart, Julia H. Allen, Michelle A. Valdez, and Lisa R. Young
This report describes the inaugural Measuring What Matters Workshop conducted in November 2014 and the team’s experiences in planning and executing the workshop and identifying improvements for future offerings. The Measuring What Matters Workshop introduces the Goal-Question-Indicator-Metric (GQIM) approach that enables users to derive meaningful metrics for managing cybersecurity risks from strategic and business objectives. This approach helps ensure that organizational leaders have better information to make decisions, take action, and change behaviors.
Download a PDF of the report.
A Dynamic Model of Sustainment Investment
By Sarah Sheard, Robert Ferguson, Andrew P. Moore, and Mike Phillips
This paper describes a dynamic sustainment model that shows how budgeting, allocation of resources, mission performance, and strategic planning are interrelated and how they affect each other over time. Each of these processes is owned by a different stakeholder, so a decision made by one stakeholder might affect performance in a different organization. Worse, delaying a decision to fund some work might result in much longer delays and much greater costs to several of the organizations.
The SEI developed and calibrated a system dynamics model that shows interactions of various stakeholders over time and the results of four realistic scenarios. The current model has been calibrated with data from the F/A-18 and EA-18G Advanced Weapons Lab (AWL) at China Lake, CA.
The model makes it possible for a decision maker to study different decision scenarios and interpret the likely effects on other stakeholders in acquisition. In a scenario where sustainment infrastructure investment is shortchanged over a period of time, the results of the calibrated model show a tipping-point phenomenon.
Download a PDF of the report.
Predicting Software Assurance Using Quality and Reliability Measures
By Carol Woody, Robert J. Ellison, and William Nichols
Security vulnerabilities are defects that enable an external party to compromise a system. Our research indicates that improving software quality by reducing the number of errors also reduces the number of vulnerabilities and hence improves software security. Some portion of security vulnerabilities (maybe over half of them) are also quality defects. This report includes security analysis based on data the Software Engineering Institute (SEI) has collected over many years for 100 software development projects. Can quality defect models that predict quality results be applied to security to predict security results? Simple defect models focus on an enumeration of development errors after they have occurred and do not relate directly to operational security vulnerabilities, except when the cause is quality related. This report discusses how a combination of software development and quality techniques can improve software security.
Download a PDF of the report.
Additional Resources
For the latest SEI technical reports and notes, please visit http://resources.sei.cmu.edu/library/.
Sep 10, 2015 08:58am
By Donald Firesmith, Principal Engineer, Software Solutions Division
One of the most important and widely discussed trends within the software testing community is shift left testing, which simply means beginning testing as early as practical in the lifecycle. What is less widely known, both inside and outside the testing community, is that testers can employ four fundamentally different approaches to shift testing to the left. Unfortunately, different people commonly use the generic term shift left to mean different approaches, which can lead to serious misunderstandings. This blog post explains the importance of shift left testing and defines each of these four approaches using variants of the classic V model to illustrate them.
The Consequences of Testing Late in the Lifecycle
For decades, it has been well known that defects are more difficult and expensive to fix the later they are found in the lifecycle. This phenomenon is one reason why treating testing as a sequential phase at the end of waterfall development has long been viewed as a major pitfall of system and software testing. Examples of the harm caused by postponing testing include:
Testers may be less involved in initial planning, often resulting in insufficient resources being allocated to testing.
Many requirements, architecture, and design defects are not uncovered and fixed until after significant effort has been wasted on their implementation.
Debugging (including identifying, localizing, fixing, and regression testing defects) becomes harder as more software is produced and integrated.
Encapsulation makes it harder to perform whitebox testing and to achieve high levels of code coverage during testing.
There is less time to fix defects found by testing, thereby increasing the likelihood that they will be postponed until later increments or versions of the system, which creates a "bow wave" of technical debt that can sink projects if it grows too large.
These negative consequences of late testing increase development and maintenance costs, lead to missed deadlines and schedule delays, decrease quality due to residual defects, and generally lower project morale and job satisfaction.
Testing
Because the term testing means different things to different people, we need to ensure a common understanding of the word before discussing the four different types of shift left testing. Traditionally, people have decomposed verification into four different methods: analysis, demonstration, inspection, and testing. For many people, however, testing has become synonymous with quality assurance and the word has been expanded to include all four verification methods. Here and in my book, Common System and Software Testing Pitfalls: How to Prevent and Mitigate Them: Descriptions, Symptoms, Consequences, Causes, and Recommendations, I restrict the term testing to its traditional meaning:
placing an executable into a known pretest state in a known pretest environment and stimulating it with known stimuli, and then verifying whether the executable’s resulting behavior and postconditions match those predicted by the test oracle (e.g., requirements or user expectations, architecture, or design).
In other words, a test is essentially an experiment to determine whether something (typically software or a system) executes correctly. Shift left testing does not mean the use of non-testing methods such as inspections (including audits, desk checking, Fagan inspections, reviews, and walkthroughs) and analyses (including static analysis of software), because we have always used these methods early in the development cycle. When developers and testers wish to include the other verification methods, they should use the terms shift left quality assurance or shift left verification.
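To make this definition concrete, here is a minimal sketch of a test in Python's unittest framework. The system under test and its requirement are hypothetical; the point is only to show the elements of the definition: a known pretest state, known stimuli, and a test oracle that predicts the result.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """The executable under test (a hypothetical system)."""
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        price, percent = 80.00, 10.0       # known pretest state and known stimuli
        result = apply_discount(price, percent)
        self.assertEqual(result, 72.00)    # oracle: behavior predicted by the requirement

if __name__ == "__main__":
    unittest.main()
```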
Testing and the V Model
The classic V model is a traditional way of graphically representing software engineering activities. As shown in Figure 1, the left side of the V represents requirements, architecture, design, and implementation, whereas the right side represents integration and testing. Note that the word "system" in Figure 1 could mean either a pure software application or a system (or subsystem) consisting of software, hardware, and potentially other types of components such as data, documents, people, facilities, and devices.
Figure 1: Traditional V Model
The V model has come under a great deal of significant, and justified, criticism in recent decades because it can easily be interpreted to imply a strict sequential waterfall development cycle that is inconsistent with the modern evolutionary (that is, incremental, iterative, concurrent, and time-boxed) development used in projects that follow Agile or DevOps approaches. The V model can also be interpreted as prohibiting a test-first approach or test-driven development (TDD). On the other hand, as long as system and software engineers remember its limitations and view it merely as notional, showing only logical relationships between development activities, the V model provides a succinct way to illustrate our descriptions of approaches to shift left testing. With this in mind, shift left essentially means moving testing to the left on the V.
The Four Methods of Shifting Testing to the Left
There are four basic ways to shift testing earlier in the lifecycle (that is, leftward on the V model). These can be referred to as traditional shift left testing, incremental shift left testing, Agile/DevOps shift left testing, and model-based shift left testing.
Traditional Shift Left Testing. As shown in Figure 2 below, traditional shift left moves the emphasis of testing lower down (and therefore slightly to the left) on the right-hand side of the V. Instead of emphasizing acceptance and system-level testing (e.g., UI testing with record and playback tools), traditional shift left concentrates on unit and integration testing (e.g., using API testing and modern test tools). The transition to traditional shift left testing has largely been completed.
Figure 2: Traditional Shift Left Testing
Incremental Shift Left Testing. Few modern software-reliant projects use a strict waterfall development cycle. As shown in Figure 3 below, many projects developing large and complex software-reliant systems decompose development into a small number of increments (Vs) having correspondingly shorter durations. The shift left illustrated by the dashed red arrows occurs because parts of the single, large waterfall V model’s types of testing (shown in gray) are shifted left to become increments of the corresponding types of testing in the smaller incremental V models. When each increment is also a delivery to the customer and operations, then incremental shift left testing shifts both developmental testing and operational testing to the left. Incremental shift left testing is popular when developing large, complex systems, especially those incorporating significant amounts of hardware. Like traditional shift left, the transition to incremental shift left has also been largely completed.
Figure 3: Incremental Shift Left Testing
Agile/DevOps Shift Left Testing. As shown in Figure 4, Agile and DevOps projects have numerous short-duration Vs (a.k.a. sprints) in lieu of the single V or small number of Vs in the previous two examples of shift left testing. These small Vs would also be modified if one or more early sprints are used to block out the basic requirements and architecture or if test-first and test-driven development (TDD) are being performed. The shift left occurs because the types of testing on the right sides of the earliest of these tiny Vs are to the left of the corresponding types of testing on the right side of the larger V(s) they replace. While Figure 4 below appears remarkably the same for Agile and DevOps, Agile testing is typically restricted to developmental testing and does not include operational testing, which occurs once the system is placed into operation. The transition to Agile/DevOps shift left testing is currently popular and ongoing.
Figure 4: Agile/DevOps Shift Left
Model-Based Shift Left Testing. The previous forms of shifting testing left all concentrated on beginning the testing of software earlier in the development cycle. Waiting until software exists to begin testing, however, largely and unnecessarily limits the use of testing to uncovering coding defects. This delay is particularly disturbing because from 45 percent to 65 percent of defects are introduced in the requirements, architecture, and design activities.
As shown in Figure 5 below, model-based testing moves testing to the left side of the Vs by testing executable requirements, architecture, and design models. This shift enables testing to begin almost immediately, instead of waiting a long time (traditional), a medium time (incremental), or a short time (Agile/DevOps) until software on the right side of the Vs is available to test. This trend is just beginning and will become increasingly popular as executable models and associated simulation/testing tools become more widely available.
Figure 5: Model-Based Shift Left Testing
These four different types of shift left testing form a historical progression with each one building on the ones that preceded it:
Traditional shift left testing corrects an overemphasis on UI-based system and acceptance testing.
Incremental shift left testing introduces incremental testing via an incremental development cycle.
Agile/DevOps shift left testing introduces continuous testing (CT) via an evolutionary lifecycle composed of many short duration sprints.
Model-Based shift left testing introduces the testing of executable requirements, architecture, and design models.
Conclusion
Shift left testing is a powerful and well-known trend within the software testing community that is largely intended to better integrate testing into system/software engineering and thereby uncover defects earlier, when they are easier and less expensive to fix. What is less well known, both within and outside of the testing community, is that there are four different ways to shift testing left and that these methods build upon each other to greatly improve the efficiency, effectiveness, and even the scope of testing.
Each of these four variants of shift left testing brings advantages, not least of which is finding defects earlier, when they are easier and cheaper to fix. Each new shift left method has proved more powerful than the previous one, so it is reasonable to suspect that model-based shift left testing may well be the most powerful of all because it helps to uncover requirements, architecture, and design defects before they are even implemented in software. Not surprisingly, the SEI is investigating the testing of executable architecture models developed using the Architecture Analysis and Design Language (AADL).
Additional Resources
To read Don Firesmith’s blog post, Using V Models for Testing, please visit http://blog.sei.cmu.edu/post.cfm/using-v-models-testing-315.
To read more about the Testing at the End anti-pattern, please see his most recent book, Common System and Software Testing Pitfalls: How to Prevent and Mitigate Them: Descriptions, Symptoms, Consequences, Causes, and Recommendations, which was published as part of the SEI Series in Software Engineering.
To learn more about a framework for incremental life-cycle assurance of mission and safety critical system certification, please visit the following link: http://blog.sei.cmu.edu/post.cfm/improving-safety-critical-systems-with-a-reliability-validation-improvement-framework.
Sep 10, 2015 08:57am
By Douglas C. Schmidt, Principal Researcher
As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in governing operational resilience, model-driven engineering, software quality, Android app analysis, software architecture, and emerging technologies. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.
Defining a Maturity Scale for Governing Operational Resilience
By Katie C. Stewart, Julia H. Allen, Audrey J. Dorofee, Michelle A. Valdez, and Lisa R. Young
Achieving operational resilience in today’s environment is becoming increasingly complex as the pace of technology and innovation continues to accelerate. Sponsorship, strategic planning, and oversight of operational resilience are the most crucial activities in developing and implementing an effective operational resilience management (ORM) system. These governance activities are described in detail in the CERT® Resilience Management Model enterprise focus (EF) process area (PA). To ensure operational resilience, an organization must identify shortfalls across these defined activities, make incremental improvements, and measure improvement against a defined, accepted maturity scale. The current version of the CERT Resilience Management Model (CERT-RMM V1.2) utilizes a maturity architecture (levels and descriptions) that may not meet the granularity needs for organizations committed to making incremental improvements in governing operational resilience. To achieve a more granular approach, the CERT-RMM Maturity Indicator Level (MIL) scale was developed for application across all CERT-RMM PAs.
The CERT Division of the SEI is conducting ongoing research around the current state of the practice of governing operational resilience and developing specific actionable steps for improving the governance of operational resilience. Study results provide the specific EF PA MIL scale for assessing maturity, identifying incremental improvements, and measuring improvements.
Download a PDF of the report.
Model-Driven Engineering: Automatic Code Generation and Beyond
By John Klein, Harry L. Levinson, and Jay Marchetti
Increasing consideration of model-driven engineering (MDE) tools for software development efforts means that acquisition executives must more often deal with the following challenge: Vendors claim that by using MDE tools, they can generate software code automatically and achieve high developer productivity. However, MDE consists of more than code generation tools; it is also a software engineering approach that can affect the entire lifecycle of a system from requirements gathering through sustainment. This report focuses on the application of MDE tools for automatic code generation when acquiring systems built using these software development tools and processes. The report defines some terminology used by MDE tools and methods, emphasizing that MDE consists of both tools and methods that must align with the overall acquisition strategy. Next, it discusses how the use of MDE for automatic code generation affects acquisition strategy and introduces new risks to the program. It then offers guidance on selecting, analyzing, and evaluating MDE tools in the context of risks to an organization's acquisition effort throughout the system lifecycle. Appendices provide a questionnaire that an organization can use to gather information about vendor tools, along with tool-evaluation criteria, mapped to the questionnaire, that relate to acquisition concerns.
A supplementary file, the Questionnaire Template, is also available through the spreadsheet link. It contains the questionnaire used in this study; you can download and use it to collect information from vendors for your own model-driven engineering tool assessments.
Download a PDF of the report.
Improving Quality Using Architecture Fault Analysis with Confidence Arguments
By Peter H. Feiler, Charles B. Weinstock, John B. Goodenough, Julien Delange, Ari Z. Klein, and Neil Ernst (University of British Columbia)
This case study shows how an analytical architecture fault-modeling approach can be combined with confidence arguments to diagnose a time-sensitive design error in a control system and to provide evidence that proposed changes to the system address the problem. The analytical approach, based on the SAE Architecture Analysis and Design Language (AADL) for its well-defined timing and fault behavior semantics, demonstrates that such hard-to-test errors can be discovered and corrected early in the lifecycle, thereby reducing rework cost. The case study shows that by combining the analytical approach with confidence maps, we can present a structured argument that system requirements have been met and problems in the design have been addressed adequately, increasing our confidence in the system quality. The case study analyzes an aircraft engine control system that manages fuel flow with a stepper motor. The original design was developed and verified in a commercial model-based development environment without discovering the potential for missed step commanding. During system tests, actual fuel flow did not correspond to the desired fuel flow under certain circumstances. The problem was traced to missed execution of commanded steps due to variation in execution time.
Download a PDF of the report.
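To see why such a timing error eludes conventional testing, consider a toy simulation of the failure mode the case study describes: an occasional task overrun silently drops a commanded step, and the error accumulates as drift. The probabilities and control scheme below are invented for illustration, not taken from the report.

```python
import random

def missed_steps(frames: int, overrun_prob: float, seed: int = 1) -> int:
    """Count commanded steps lost when the control task overruns its frame."""
    random.seed(seed)
    commanded = executed = 0
    for _ in range(frames):
        commanded += 1                      # controller commands one step per frame
        if random.random() > overrun_prob:  # frame completed on time
            executed += 1
        # else: execution-time variation caused the step command to be missed
    return commanded - executed

# Even a rare overrun (0.1% of frames) produces visible drift over a long run.
print(missed_steps(frames=100_000, overrun_prob=0.001))
```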
Making DidFail Succeed: Enhancing the CERT Static Taint Analyzer for Android App Sets
By Jonathan Burket, Lori Flynn, Will Klieber, Jonathan Lim, Wei Shen, and William Snavely
This report describes recent significant enhancements to DidFail (Droid Intent Data Flow Analysis for Information Leakage), the CERT static taint analyzer for sets of Android apps. In addition to improving the analyzer itself, the enhancements include a new testing framework, new test apps, and test results. A framework for testing the DidFail analyzer, including a setup for cloud-based testing, was developed and instrumented to measure performance. Cloud-based testing enables the parallel use of powerful, commercially available virtual machines to speed up testing. DidFail was also modified to use the most current versions of FlowDroid and Soot, increasing its success rate from 18 percent to 68 percent on our test set of real-world apps. Analytical features were added for more types of components and shared static fields, and new apps were developed to test these features. The improved DidFail analyzer and the cloud-based testing framework were used to test the new apps and additional apps from the Google Play store.
Download a PDF of the report.
Eliminative Argumentation: A Basis for Arguing Confidence in System Properties
By John B. Goodenough, Charles B. Weinstock, and Ari Z. Klein
Assurance cases provide a structured method of explaining why a system has some desired property, for example, that the system is safe. But there is no agreed approach for explaining what degree of confidence one should have in the conclusions of such a case. This report defines a new concept, eliminative argumentation, that provides a philosophically grounded basis for assessing how much confidence one should have in an assurance case argument. This report will be of interest mainly to those who are familiar with assurance case concepts and want to know why one argument rather than another provides more confidence in a claim. The report is also potentially of value to those interested more generally in argumentation theory.
Download a PDF of the report.
Emerging Technology Domains Risk Survey
By Christopher King, Jonathan Chu, and Andrew O. Mellinger
In today’s increasingly interconnected world, the information security community must be prepared to address emerging vulnerabilities that may arise from new technology domains. Understanding trends and emerging technologies can help information security professionals, leaders of organizations, and others interested in information security to anticipate and prepare for such vulnerabilities.
This report, originally prepared in 2014 for the Department of Homeland Security United States Computer Emergency Readiness Team (US-CERT), provides a snapshot in time of the current understanding of future technologies. Each year, this report will be updated to include new estimates of adoption timelines, new technologies, and adjustments to the potential security impact of each domain. This report will also help US-CERT to make an informed decision about the best areas to focus resources for identifying new vulnerabilities, promoting good security practices, and increasing understanding of systemic vulnerability risk.
Download a PDF of the report.
Additional Resources
For the latest SEI technical reports and notes, please visit http://resources.sei.cmu.edu/library/.
Sep 10, 2015 08:56am
By Todd Waits, Project Lead, CERT Cyber Security Solutions Directorate
This post is the latest installment in a series aimed at helping organizations adopt DevOps.
At a recent workshop we hosted, a participant asked why the release frequency was so high in a DevOps environment. When teams work with significant legacy applications, a release may be a once-a-year event, and the prospect of releasing more frequently sends the engineering teams running for the hills. More frequent releases are made possible by properly implementing risk mitigation processes, including automated testing and deployment. With these processes in place, all stakeholders can be confident that frequent releases will be successful.
When testing our software builds, trust becomes a numbers game. What level of certainty do we have that an application will build and deploy successfully if we have tested the whole process only twice? What level of certainty do we have if we have tested that process 600 times? If our application passed 595 of the last 600 tests (with adequate coverage) and deployed to production-like environments, we can be more certain of successful performance than if it passed one of only two tests.
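One way to put numbers on this intuition is a confidence interval on the deployment success rate. The sketch below (plain Python, using the Wilson score interval) contrasts the two cases from the paragraph above:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% confidence by default)."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (center - margin, center + margin)

# Two passing runs out of two tell us very little about future deployments...
print(wilson_interval(2, 2))      # roughly (0.34, 1.0): wide, low information
# ...while 595 of 600 pins the success rate down tightly.
print(wilson_interval(595, 600))  # roughly (0.981, 0.996)
```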
The frequency that is important in DevOps is not the release-to-production frequency, but the frequency of releasing to production-like environments. Ensuring environment parity through the use of infrastructure as code allows teams to test their deployments consistently and continuously. This frequent testing increases the stakeholders’ confidence that the software will deploy reliably. More frequent releases without testing frameworks and automation in place will introduce more chaos into the release cycle, and ultimately more vulnerabilities and problems into production.
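When environments are defined as code, parity stops being a matter of opinion: two environment definitions can be compared mechanically. A simplified illustration, with invented package manifests, might look like this:

```python
import hashlib
import json

def env_fingerprint(manifest: dict) -> str:
    """Hash a canonicalized environment definition so drift becomes detectable."""
    canonical = json.dumps(manifest, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Hypothetical environment manifests; in practice these would be generated
# from the same infrastructure-as-code definitions used for deployment.
staging    = {"os": "ubuntu-14.04", "python": "2.7.9", "nginx": "1.6.2"}
production = {"os": "ubuntu-14.04", "python": "2.7.9", "nginx": "1.6.2"}

assert env_fingerprint(staging) == env_fingerprint(production), "environment drift!"
print("staging matches production:", env_fingerprint(staging))
```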
Testing should actually test what is critical to the application. Planning, communication, and code reviews help all teams involved identify what testing should be in place to properly mitigate critical risks to the business goals of the application. Mitigating risk gives all team members the freedom and confidence to explore and innovate without fear that they will deploy features that detract from the business value of the application.
DevOps allows for failure in the correct phase. For example, developers, by the nature of their jobs, will introduce instability, and if the instability breaks the application, the developers should be notified immediately. If that instability is instantly pushed to production, then DevOps principles are not being followed. The longer a problem exists before it is fixed, the more it costs to fix. Testing frequently and automatically across all phases of development is vital.
If you cannot fully test the application after every build, test only the parts that are important. Tests can be added as needs are identified by the engineering teams and stakeholders involved. The frequency of the testing will depend on the types of tests that are needed and the resources required to run them. In some cases, it is better to build the application from microservices rather than as a monolith. Doing so enables you to test only the small pieces being changed and to deploy only the pieces that change successfully, thereby reducing your risk profile.
If an application takes three hours to build, it is probably not feasible to run a full suite of tests on it after every developer commits. Perhaps, however, there is a subset of tests that is run after every commit, and the full test suite can run at night. If fuzz testing is important to the application, but it takes three days to test the application, then run fuzz testing every three days. All these processes should be automated so the engineers (whether the quality assurance team, developers, or operations) can focus on other tasks.
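One simple way to automate this kind of schedule is to tag each suite with the triggers it should run on. The sketch below is hypothetical; the suite names, runtimes, and triggers are invented to mirror the examples above:

```python
from dataclasses import dataclass

@dataclass
class TestSuite:
    name: str
    runtime_minutes: int
    triggers: set[str]  # pipeline events on which this suite should run

# Invented registry: fast checks per commit, the full suite nightly, fuzzing
# on its own three-day cadence.
SUITES = [
    TestSuite("unit-fast", 5, {"commit", "nightly"}),
    TestSuite("integration", 45, {"nightly"}),
    TestSuite("full-regression", 180, {"nightly"}),
    TestSuite("fuzz", 3 * 24 * 60, {"every-3-days"}),
]

def suites_for(trigger: str) -> list[TestSuite]:
    """Select the suites appropriate for a given pipeline trigger."""
    return [s for s in SUITES if trigger in s.triggers]

for suite in suites_for("commit"):
    print(f"run {suite.name} (~{suite.runtime_minutes} min)")
```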
In some environments, more frequent releases to customers are not feasible. Whether release cycles are measured in years or hours, frequent testing, especially testing deployment to production-like environments, increases stakeholder confidence that business value will be delivered. Even if the release is two years down the road, the releases have been tested in an automated fashion on a daily, weekly, or monthly timeframe throughout the lifetime of the project.
Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
Additional Resources
To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits, please click here.
To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here.
To listen to the podcast DevOps—Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here.
To read all of the blog posts in our DevOps series, please click here.
Sep 10, 2015 08:54am
By Douglas C. Schmidt, Principal Researcher
The SEI Blog continues to attract an ever-increasing number of readers interested in learning more about our work in agile metrics, high-performance computing, malware analysis, testing, and other topics. As we reach the mid-year point, this blog posting highlights our 10 most popular posts and links to additional related resources you might find of interest. (Many of our posts cover related research areas, so we grouped them together for ease of reference.)
Before we take a deeper dive into the posts, let’s take a look at the top 10 posts (ordered by number of visits, with #1 being the highest number of visits):
#1. Using V Models for Testing
#2. Common Testing Problems: Pitfalls to Prevent and Mitigate
#3. Agile Metrics: Seven Categories
#4. Developing a Software Library for Graph Analytics
#5. Four Principles of Engineering Scalable, Big Data Software Systems
#6. Two Secure Coding Tools for Analyzing Android Apps
#7. Four Types of Shift-Left Testing
#8. Writing Effective YARA Signatures to Identify Malware
#9. Fuzzy Hashing Techniques in Applied Malware Analysis
#10. Addressing the Software Engineering Challenges of Big Data
Testing
Using V Models for Testing
Common Testing Problems: Pitfalls to Prevent and Mitigate
Four Types of Shift-Left Testing
Don Firesmith’s blog posts on testing continue to rank among the most visited posts on the SEI Blog. The post Using V Models for Testing, which was published in November 2013, has been the most popular post on our site throughout the first half of this year. In the post, Firesmith introduces three variants on the traditional V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.
The V model builds on the traditional waterfall model of system or software development by emphasizing verification and validation. The V model takes the bottom half of the waterfall model and bends it upward into the form of a V, so that the activities on the right verify or validate the work products of the activity on the left.
More specifically, the left side of the V represents the analysis activities that decompose the users’ needs into small, manageable pieces, while the right side of the V shows the corresponding synthesis activities that aggregate (and test) these pieces into a system that meets the users’ needs.
In the two-part blog post series Common Testing Problems: Pitfalls to Prevent and Mitigate, Firesmith begins by citing a widely known study for the National Institute of Standards & Technology (NIST) reporting that inadequate testing methods and tools annually cost the U.S. economy between $22.2 billion and $59.5 billion, with roughly half of these costs borne by software developers in the form of extra testing and half by software users in the form of failure avoidance and mitigation efforts. The same study notes that between 25 and 90 percent of software development budgets are often spent on testing. In this two-part series (read the first post here; read the second post here), Firesmith highlights results of an analysis documenting problems that commonly occur during testing. Specifically, this series of posts identifies and describes 77 testing problems organized into 14 categories; lists potential symptoms by which each can be recognized, potential negative consequences, and potential causes; and makes recommendations for preventing them or mitigating their effects.
In the post, Four Types of Shift-Left Testing, Firesmith details four basic methods to shift testing earlier in the lifecycle (that is, leftward on the V model). These can be referred to as traditional shift left testing, incremental shift left testing, Agile/DevOps shift left testing, and model-based shift left testing.
Readers interested in finding out more about Firesmith’s work in this field can refer to the following resources:
Book: Common System and Software Testing Pitfalls
Podcast: Three Variations on the V Model for System and Software Testing
Agile Metrics: Seven Categories
For agile software development, one of the most important metrics is delivered business value. This progress measure, while observation-based, does not violate the team spirit. A group of SEI researchers began research to help program managers measure progress more effectively. At the same time, we want teams to work in their own environment and use metrics specific to the team, while differentiating from metrics that are used at the program level.
The SEI blog post, Agile Metrics: Seven Categories, details three key views of agile team metrics that are typical of most implementations of agile methods: velocity, sprint burn-down chart, and release burn-up chart.
This research, which is presented in greater detail in the SEI technical note Agile Metrics: Progress Monitoring of Agile Contractors, involved interviewing professionals who manage agile contracts, which gave SEI researchers insight from professionals in the field who have successfully worked with agile suppliers in DoD acquisitions.
Based on interviews with personnel who manage agile contracts, the technical note (and blog post) also identify seven successful ways to monitor progress that help programs account for the regulatory requirements common in the DoD.
Readers interested in finding out more about this research can read the following SEI technical reports and notes:
Agile Metrics: Progress Monitoring of Agile Contractors
Considerations for Using Agile in DoD Acquisition
Agile Methods: Selected DoD Management and Acquisition Concerns
DoD Information Assurance and Agile: Challenges and Recommendations Gathered Through Interviews with Agile Program Managers and DoD Accreditation Reviewers
Parallel Worlds: Agile and Waterfall Differences and Similarities
Developing a Software Library for Graph Analytics
Graph algorithms are in wide use in DoD software applications, including intelligence analysis, autonomous systems, cyberintelligence and security, and logistics optimizations. In late 2013, several luminaries from the graph analytics community released a position paper calling for an open effort, now referred to as GraphBLAS, to define a standard for graph algorithms in terms of linear algebraic operations. BLAS stands for Basic Linear Algebra Subprograms and is a common library specification used in scientific computation. The authors of the position paper propose extending the National Institute of Standards and Technology’s Sparse Basic Linear Algebra Subprograms (spBLAS) library to perform graph computations. The position paper served as the latest catalyst for the ongoing research by the SEI’s Emerging Technology Center in the field of graph algorithms and heterogeneous high-performance computing (HHPC).
This blog post, the second in a series highlighting ETC’s work in high-performance computing, is a follow-up to the 2013 post, Architecting Systems of the Future. This second post describes efforts to create a software library of graph algorithms for heterogeneous architectures that will be released via open source. It details research that bridges the gap between the academic focus on fundamental graph algorithms and our focus on architecture and hardware issues.
The post by the SEI’s Scott McMillan also highlights a collaboration with researchers at Indiana University’s Center for Research in Extreme Scale Technologies (CREST), which developed the Parallel Boost Graph Library (PBGL). In particular, the SEI is working with Dr. Andrew Lumsdaine, who serves on the Graph 500 Executive Committee and is considered a world leader in graph analytics. Researchers in this lab worked with us to implement and benchmark data structures, communication mechanisms, and algorithms on GPU hardware.
Readers interested in finding out more about our work in this area can read the following SEI technical note:
Patterns and Practices for Future Architectures
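To illustrate the linear-algebra formulation of graph algorithms that GraphBLAS standardizes, here is a minimal NumPy sketch of BFS expressed as repeated masked matrix operations (dense here for brevity; real implementations use sparse structures). This is a conceptual sketch, not code from the library described above.

```python
import numpy as np

def bfs_linear_algebra(A: np.ndarray, source: int) -> np.ndarray:
    """BFS as repeated frontier expansion using the adjacency matrix.
    A[i, j] is True if there is an edge from vertex i to vertex j."""
    n = A.shape[0]
    levels = np.full(n, -1)                 # -1 marks unreachable vertices
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    visited = frontier.copy()
    level = 0
    while frontier.any():
        levels[frontier] = level
        # One BFS step: vertices reachable from the frontier, minus those visited
        frontier = A[frontier].any(axis=0) & ~visited
        visited |= frontier
        level += 1
    return levels

A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=bool)
print(bfs_linear_algebra(A, 0))  # [0 1 1 2]
```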
Big Data: Four Principles of Engineering Scalable, Big Data Software Systems & Addressing the Software Engineering Challenges of Big Data
New data sources, ranging from diverse business transactions to social media, high-resolution sensors, and the Internet of Things, are creating a digital tsunami of big data that must be captured, processed, integrated, analyzed, and archived. Big data systems that store and analyze petabytes of data are becoming increasingly common in many application domains. These systems represent major, long-term investments, requiring considerable financial commitments and massive scale software and system deployments. With analysts estimating data storage growth at 30 to 60 percent per year, organizations must develop a long-term strategy to address the challenge of managing projects that analyze exponentially growing data sets with predictable, linear costs.
In a popular series on the SEI blog, Ian Gorton continues his exploration of the software engineering challenges of big data systems. In the first post in the series, Addressing the Software Engineering Challenges of Big Data, Gorton describes a risk reduction approach called Lightweight Evaluation and Architecture Prototyping (for Big Data) that he developed with fellow researchers at the SEI. The approach is based on principles drawn from proven architecture and technology analysis and evaluation techniques to help the Department of Defense (DoD) and other enterprises develop and evolve systems to manage big data.
In another post, the tenth most popular on our site in the first six months of 2015, Four Principles of Engineering Scalable, Big Data Software Systems, Gorton describes principles that hold for any scalable, big data system.
Readers interested in finding out more about Gorton’s research in big data can refer to the following additional resources:
Webinar: Software Architecture for Big Data Systems
Podcast: An Approach to Managing the Software Engineering Challenges of Big Data
Two Secure Coding Tools for Analyzing Android Apps
One of the most popular areas of research among SEI blog readers this year has been the series of posts highlighting our work on secure coding for the Android platform. Android is an important focus, given its mobile device market dominance (82 percent of worldwide market share in the third quarter of 2013), the adoption of Android by the DoD, and the emergence of popular massive open online courses on Android programming and security.
Since its publication in late April, the post Two Secure Coding Tools for Analyzing Android Apps, by Will Klieber and Lori Flynn, has been among the most popular on our site. The post highlights a tool they developed, DidFail, that addresses a problem often seen in information flow analysis: the leakage of sensitive information from a sensitive source to a restricted sink (taint flow). Previous static analyzers for Android taint flow did not combine precise analysis within components with analysis of communication between Android components (intent flows). The SEI CERT Division’s new tool analyzes taint flow for sets of Android apps, not only single apps.
DidFail is available to the public as a free download. Also available is a small test suite of apps that demonstrates the functionality that DidFail provides.
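Conceptually, taint-flow analysis asks whether data from a sensitive source can reach a restricted sink. The toy sketch below illustrates that propagation idea on a three-statement "program"; the source and sink names are invented, and this is not how DidFail's static analysis is implemented.

```python
# Toy illustration of the taint-flow concept: does data from a sensitive
# source reach a restricted sink? Conceptual sketch only -- it does not
# reflect DidFail's actual static analysis.

SOURCES = {"getDeviceId"}   # hypothetical sensitive sources
SINKS = {"sendSms"}         # hypothetical restricted sinks

# A tiny "program" as a list of assignments and calls.
program = [
    ("assign", "id",      "getDeviceId"),  # id = getDeviceId()
    ("assign", "msg",     "id"),           # msg = id
    ("call",   "sendSms", "msg"),          # sendSms(msg)
]

tainted = set()
for kind, target, arg in program:
    if kind == "assign":
        # A variable becomes tainted if it receives data from a source
        # or from an already-tainted variable.
        if arg in SOURCES or arg in tainted:
            tainted.add(target)
    elif kind == "call" and target in SINKS and arg in tainted:
        print(f"taint flow: sensitive data reaches sink {target}()")
```

DidFail's contribution is performing this kind of analysis precisely within components while also tracking intent-based communication across sets of apps.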
The post by Klieber and Flynn is the latest in a series detailing the CERT Secure Coding team’s work on techniques and tools for analyzing code for mobile computing platforms.
Readers interested in finding out more about the CERT Secure Coding Team’s work in secure coding for the Android platform can refer to the following additional resources:
Presentation: Using DidFail to Analyze Flow of Sensitive Information in Sets of Android Apps
Blog Post: An Enhanced Tool for Securing Android Apps
Technical Report: Making DidFail Succeed: Enhancing the CERT Static Taint Analyzer for Android App Sets
SOAP 2014 Workshop Paper: Android Taint Flow Analysis for App Sets
Malware: Writing Effective YARA Signatures to Identify Malware & Fuzzy Hashing Techniques in Applied Malware Analysis
Previous SEI Blog posts on identifying malware have focused on applying similarity measures to malicious code to identify related files and reduce analysis expense. Another way to observe similarity in malicious code is to leverage analyst insights by identifying files that possess some property in common with a particular file of interest. One way to do this is by using YARA, an open-source project that helps researchers identify and classify malware. YARA has gained enormous popularity in recent years as a way for malware researchers and network defenders to communicate their knowledge about malicious files, from identifiers for specific families to signatures capturing common tools, techniques, and procedures (TTPs). In the blog post Writing Effective YARA Signatures to Identify Malware, CERT Division researcher David French provides guidelines for using YARA effectively, focusing on selection of objective criteria derived from malware, the type of criteria most useful in identifying related malware (including strings, resources, and functions), and guidelines for creating YARA signatures using these criteria.
YARA provides a robust language (based on Perl Compatible Regular Expressions) for creating signatures with which to identify malware. These signatures are encoded as text files, which makes them easy to read and communicate with other malware analysts. Since YARA applies static signatures to binary files, the criteria statically derived from malicious files are the easiest and most effective criteria to convert into YARA signatures. The post highlights three different types of criteria that are most suitable for YARA signature development: strings, resources, and function bytes.
The simplest usage of YARA is to encode strings that appear in malicious files. The usefulness of matching strings, however, is highly dependent on which strings are chosen.
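For readers who want to experiment, the snippet below shows the kind of simple string-based rule the post describes, compiled and applied through the open-source yara-python bindings. The rule name, strings, and sample data are invented for illustration.

```python
# A minimal string-matching YARA rule applied via the yara-python bindings
# (pip install yara-python). The rule and strings are invented; effective
# signatures should use distinctive criteria drawn from the malware itself.
import yara

RULE = r'''
rule example_family
{
    strings:
        $cmd = "cmd.exe /c del"          // suspicious command line
        $url = "hxxp://example-c2.test"  // hypothetical C2 indicator
    condition:
        any of them
}
'''

rules = yara.compile(source=RULE)
matches = rules.match(data=b"...cmd.exe /c del payload.tmp...")
for m in matches:
    print(m.rule)  # -> example_family
```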
In the post Fuzzy Hashing Techniques in Applied Malware Analysis, French highlights improved ways of assessing whether two files are similar, including fuzzy hashing.
When investigating a security incident involving malware, analysts will create a report documenting their findings. To denote the identity of a malicious binary or executable, analysts often use cryptographic hashing, which computes a hash value on a block of data, such that an accidental or intentional change to the data will change the hash value. In communication science, cryptographic hashes are frequently used to determine the integrity of a message sent through a communication channel. In malware research, they are useful for positively identifying a piece of software. If a suspected file has the same cryptographic hash as a known file, an analyst is reasonably confident that the files are identical. Modifying even a single bit of a malicious file, however, will alter its cryptographic hash. The result is that inconsequential changes to malicious files will prevent analysts from rapidly observing that a suspected file is identical to a file they have already seen.
To counter this behavior, analysts seek improved ways of assessing whether two files are similar. One such method is known as fuzzy hashing. Fuzzy hashes and other block/rolling hash methods provide a continuous stream of hash values for a rolling window over the binary. These methods produce hash values that allow analysts to assign a percentage score that indicates the amount of content that the two files have in common. A recent type of fuzzy hashing, known as context triggered piecewise hashing, has gained enormous popularity in malware detection and analysis in the form of an open-source tool called ssdeep.
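A minimal sketch makes the contrast concrete. It assumes the ssdeep Python bindings are installed; the payload bytes are invented for illustration.

```python
# Contrast: a single-bit change flips a cryptographic hash completely,
# while a fuzzy hash still reports high similarity. Assumes the ssdeep
# Python bindings (pip install ssdeep) are available.
import hashlib
import ssdeep

original = b"this is the content of a malicious file " * 200
modified = bytearray(original)
modified[5000] ^= 0x01            # flip a single bit
modified = bytes(modified)

# Cryptographic hashes: completely different for near-identical inputs.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(modified).hexdigest())

# Fuzzy hashes: a similarity score from 0 (unrelated) to 100 (identical).
h1, h2 = ssdeep.hash(original), ssdeep.hash(modified)
print(ssdeep.compare(h1, h2))     # typically a high score for small edits
```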
Looking Ahead
In the coming months, we will be continuing our series on DevOps, as well as posts on vulnerability analysis tools, predictive analysis, context-aware computing, and the SEI strategic plan. We will also continue our SPRUCE series highlighting recommended practices in the fields of Agile at Scale, Monitoring Software-Intensive System Acquisition Programs, and Managing Intellectual Property in the Acquisition of Software-Intensive Systems.
Thank you for your support. We publish a new post on the SEI Blog every Monday morning. Let us know if there is any topic you would like to see covered in the SEI Blog.
We welcome your feedback in the comments section below.
Additional Resources
For the latest SEI technical reports and notes, please visit http://resources.sei.cmu.edu/library/.
According to our new study on communication in relationships, couples who argue effectively are ten times more likely to have a happy relationship than those who sweep difficult issues under the rug.
And what are the most difficult topics couples usually avoid or harmfully debate? The study found that the three most difficult topics for couples to discuss are sex, finances, and irritating habits. Other interesting statistics include:
Four out of five say poor communication played a role in their last failed relationship and half cite poor communication as a significant cause of the failed relationship.
Fewer than one in five believe they are usually to blame when a conversation goes poorly.
Those who blame their partner for poor communication are more likely to be dissatisfied with the relationship.
Many couples operate under the myth that when they avoid discussing sensitive issues, they avoid an argument. And most couples mistakenly assume that avoiding an argument is ultimately a win for the relationship. However, what we don’t talk out, we eventually act out. In reality, it’s not how much you argue, but the way in which you debate sensitive issues that ultimately determines the success of your relationship. The good news is that with the right set of skills, crucial conversations can strengthen your relationship.
Here are five tips for effectively holding crucial conversations with your significant other:
Manage your thoughts. Soften your judgments by asking yourself why a reasonable, rational, and decent person would do what your significant other is doing.
Affirm before you complain. Don’t start by diving into the issue. Establish emotional safety by letting your significant other know you respect and care about him or her.
Start with the facts. When you begin discussing the issue, strip out accusatory, judgmental, and inflammatory language.
Be tentative but honest. Having laid out the facts, tell your significant other why you’re concerned—but don’t do it as an accusation, share it as an opinion.
Invite dialogue. After sharing your concerns, encourage your significant other to share his or hers—even if he or she disagrees with you. If you are open to hearing your significant other’s point of view, he or she will be more open to yours.
Related Material: Crucial Applications: Overcoming the "Nasty versus Nice" Debate
Crucial Applications: Talking About Holiday Finances
Crucial Applications: Delivering Bad News
ABOUT THE AUTHOR
Joseph Grenny is coauthor of four New York Times bestsellers, Change Anything, Crucial Conversations, Crucial Confrontations, and Influencer.
Dear Crucial Skills,
I have a coworker who abuses my open-door communication policy. Our offices are side-by-side, and we both benefit from this arrangement by discussing dilemmas and sorting through issues to prioritize our group’s efforts.
However, my coworker has a very reactive way of coping with an e-mail she does not like or a phone call from someone who disagrees with her. She will come rushing into my office to rant about this e-mail or that coworker, or this phone call or that situation. This happens five to six times a day! This behavior is distracting because she expects me to put aside what I’m working on to pay attention to her. She’s also thin-skinned, very volatile, and I suspect less than receptive to a conversation that centers on her negative behavior. Any suggestions?
Signed,
Open-Door Abuse
Dear Open-Door,
This is an interesting question because it’s hard to say which issue you should address.
The first skill of crucial conversations is picking the right conversation. Your two options are:
Reset expectations. This one is fairly straightforward. The key is to make it about you and not the other person. This is you realizing you need a different boundary in order to be productive in your work—not blaming your coworker for interrupting you. If you set it up that way, there is minimal chance of defensiveness.
Address your coworker’s volatile behavior. There are two reasons to address this issue first. One reason is if you think—no matter how careful you are—you’ll be unable to focus on resetting expectations. If this is true, then you have to address your coworker’s volatile behavior first. The second reason is if it is more important to address her behavior than it is to reset expectations. When you use words like "volatile," it sounds as though you may have been putting up with abuse for some time and even enabling her misbehavior by not asking for things you want or need in your work relationship. If this is true, you have to hold an entirely different crucial conversation.
If you decide to reset expectations, as I said, make it about you and your needs—not a criticism of your colleague. This is both true and easier to express without creating defensiveness. Go in with a specific proposal—not just a vague criticism. For example, you might simply say, "I’ve noticed that I go home many times feeling disappointed in how much I get done. I’ve realized that one reason is that I don’t focus. I am going to start creating ‘islands of focus’ in my day—when I do not respond to e-mail, talk with colleagues, or schedule meetings. This will put a cramp in the spontaneous conversations we sometimes have, but I want to try this. Can I ask that from 1:00 - 4:00 p.m. you not tempt me with interesting topics?"
You’ll then need to maintain this agreement and give reminders if there are encroachments. If you don’t, then you will be colluding in undermining your own request. So be firm and consistent—odds are it will only take a couple of reminders and you’ll have a bit of solitude.
Confronting her behavior will be more difficult. I might be reading more into this than I should—but I’m inferring not just volatility (i.e., she gets animated when expressing frustrations) but hostility (she is defensive and rude when you confront her about concerns). If I am correct, you may want to hold her accountable on this issue. You may also want to give some thought to how you may be rewarding this pattern by allowing it to cause you to tiptoe around other behaviors that don’t work for you (like constant interruptions). Over time, a weakness like this can turn into a technique when those around her reward it too consistently.
If you decide to address this issue, once again, start with safety. When confronting a longstanding pattern that you’ve colluded in, a good way to do this is to acknowledge your part. For example, "I’d like to discuss a concern that I’ve put off addressing for a long time. I realize the pattern we’ve fallen into is as much my fault as yours—as I’ve been staying silent and blaming you for my silence. I’d like to discuss the problem—including how I might be contributing to it—and find a way to work together that is acceptable to both me and you."
From here, you’ll need to describe two or three examples of the pattern. Be careful, because each time you describe an instance, she’s likely to offer excuses for that instance. For example, you might say, "Last week when I pointed out misspellings on your PowerPoint slide, you called me a loser, then laughed and walked away." If she then says, "I was joking!" you need to return her to the pattern. Say something like, "I realize there might be special reasons you said things in each circumstance I raise. And yet, what I’m asking you to notice is that there is a pattern—one that is unacceptable to me. If it happened just once, I wouldn’t be discussing this. This is something that happens regularly. Can you see that?"
This will be tricky, but the key is to maintain safety while being fully honest. You need to begin exercising a firmness you have not in the past. If you do, there is a good chance you can get closer to the kind of relationship that will work for you.
Best wishes,
Joseph
Related Material: Influencing Corporate Policy
What if the other person refuses to open up?
ABOUT THE AUTHOR
David Maxfield is coauthor of two New York Times bestsellers, Change Anything and Influencer.
Dear Crucial Skills,
I have two employees who are categorized as management, yet they do not have any direct reports, nor do their job descriptions indicate any responsibilities specific to management. Because it is a large company, I am unable to modify the job classification.
I would like to delegate increased responsibility to their role, but there is also an issue of trust. These two employees do not have the desire to grow as leaders. They are content with working their eight hours a day and going home. As much as I try to help them develop, they just aren’t interested.
Do you have any suggestions for motivating or developing managers?
Motivated Manager
Dear Motivated,
Thanks for describing an interesting influence challenge that many managers face. Organizations ask managers to develop their people, and the workload makes it important for people to take on larger roles, but some employees seem comfortably stuck in their status quo.
Or maybe you’re a mom or dad whose son or daughter is comfortably stuck in the status quo—or whatever you call their basement bedroom. You want your child to launch a career, but he or she doesn’t seem interested in doing what it takes.
How do you get a person who is comfortably stuck to take action?
Avoid the fundamental attribution error. When people are stuck, we have a strong tendency to blame their personal motivation. More often than not, we describe them as lacking character, willpower, grit, or determination. This bias is so strong that psychologists call it the "Fundamental Attribution Error." However, when a person is stuck—even comfortably stuck—there is usually a lot more going on than simple laziness.
I’m not saying the employees you described aren’t lacking personal motivation. I think you described their poor initiative quite well; however, there is a good chance that personal motivation is not their only problem—it’s just the most obvious one.
Diagnose all six sources. When people are stuck, it’s usually because all Six Sources of Influence are working in combination to hold them fast. Their world is perfectly organized to create the behavior (or lack of behavior) you are currently seeing. Here are the questions we use to diagnose obstacles in all six sources:
Personal Motivation. Left in a room by themselves, would they want to take on greater responsibilities? Would they enjoy it, find it meaningful, and aspire to it as an important part of their identity? Would they take pride in it, or see it as a moral imperative? Ideas for action:
Invite choice. As part of the performance-management process, ask each employee to prepare a two- to three-year plan. Ask them to identify the strengths, weaknesses, opportunities, and threats (SWOT) your organization and your department face. Then have them anticipate how they see your department and their jobs changing in order to take advantage of these SWOTs. Finally, have them describe what they would like to be doing in two or three years and what they need to do now in order to prepare themselves.
Try small steps. Identify the crucial moments when it would be most helpful for your employees to step up to greater responsibilities. Think of times, places, and circumstances when you could really use their help in a particular way for a short period of time. It will be most effective if you can include them in finding these crucial moments. People are more trusting when they discover crucial moments for themselves. Then ask for their help during these brief and occasional crucial moments.
Personal Ability. Left in a room by themselves would they have all the skills they need to feel confident taking on greater responsibilities? Do they already have the right knowledge, skill sets, experiences, training, and strength? Ideas for action:
Training that focuses on critical dependencies. Ask your reluctant employees to identify skill sets that are new, are becoming more important, or are in short supply. These skills would make a person indispensable. If they aren’t quick to identify these skills, work with them to identify the people in your organization who could help and ask your employees to interview them.
Training that fills in missing skills. Suppose your reluctant employees did accept a greater role, what parts of an expanded job would they find most difficult, tedious, or noxious? How could you skill them up so they’d be confident, efficient, and effective in these areas? We often say, "If it’s taking too much will, add some more skill!" Maybe an ounce of skill will yield another pound of motivation.
Social Motivation. Are the right people encouraging them to take on greater responsibilities? Do the peers they respect, the managers they look up to, and their family members encourage or discourage them from stepping up? Ideas for action:
Get them some feedback. Do they know how others see them? Most of us want to believe we are doing our fair share. Motivate change by using a 360-degree feedback tool to get feedback from their peers and customers. Make it clear that the feedback is for development—not evaluation—purposes and make sure you have solutions for whatever negative feedback they receive. Otherwise, this kind of feedback can be more demoralizing than motivating.
Connect them with a greater purpose. Get them involved in field trips where they meet with their internal or external customers. Make the connection as personal as possible. Have them report to your team on what they learned and on how your team can improve.
Social Ability. If your employees take on greater responsibilities, are the people around them ready to lend a hand? Do they have mentors, trainers, and peers who can give advice and step in to help? Ideas for action:
Make them coaches. Sometimes people step up when they become responsible for someone else’s success. Consider assigning them to work with another person in your group.
Structural Motivation. Does your organization provide incentives such as performance reviews, pay, promotions, and perks that could motivate these employees to take on greater responsibilities? Your employees’ job descriptions don’t include management activities so it’s hard to use the formal reward system, but there may be other routes to explore. Ideas for action:
Recognize incremental improvements. Try small assignments, projects that can be completed within a week, and then give your honest, heartfelt appreciation when they complete them. Then gradually increase the number, size, duration, and importance of these projects. Continue to show your appreciation as you deem appropriate.
Structural Ability. Is there a way to use the environment, data, tools, cues, or systems to make it easier and more convenient for these people to take on greater responsibilities? Ideas for action:
Discover and remove obstacles. Ask yourself (or your reluctant employees), "If you wanted to take on a few additional responsibilities, what are the biggest obstacles you would face?" One good guess would be time. If nothing else about their jobs changed, they would have to work longer, harder days. How could you change that? What could you take off their plates so they would have more time for higher-value work? Showing your flexibility may encourage them to become more flexible as well.
I hope these ideas help you generate more strategies tailored to your exact situation. Notice all these ideas involve an investment of time, energy, and thought on your part. It would be easier to write off the employees as unmotivated slugs, but that would mean abdicating your own responsibilities as a manager. It would also be a very costly write-off, since they are likely to remain on your payroll.
Whether you’re dealing with reluctant employees or a child who is still living in your basement, never lose faith! When you marshal the power of all Six Sources of Influence, you can truly change anything.
David
Related Material: Change Anything: Motivating Weight Loss
Putting Skills into Action
Does the path to action still include telling a story?