SEI Blog | Sep 10, 2015 09:24am
By Douglas C. Schmidt, Principal Researcher

Happy Memorial Day. As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in architecture analysis, patterns for insider threat monitoring, source code analysis, and an insider threat security reference architecture. This post includes a listing of each report, its author(s), and links where the published reports can be accessed on the SEI website.

What's New in V2 of the Architecture Analysis & Design Language Standard?
By Peter H. Feiler, Joe Seibel, & Lutz Wrage
This report provides an overview of changes and improvements to the Architecture Analysis & Design Language (AADL) standard for describing both the software architecture and the execution platform architectures of performance-critical, embedded, and real-time systems.
PDF Download

A Pattern for Increased Monitoring for Intellectual Property Theft by Departing Insiders
By Andrew P. Moore, Michael Hanley, & Dave Mundie
This report presents an example of an enterprise architectural pattern, "Increased Monitoring for Intellectual Property (IP) Theft by Departing Insiders," to help organizations plan, prepare, and implement a means to mitigate the risk of insider theft of IP.
PDF Download

Source Code Analysis Laboratory (SCALe)
By Robert C. Seacord, Will Dormann, James McCurley, Philip Miller, Robert W. Stoddard, David Svoboda, & Jefferson Welch
This report details the CERT Program's Source Code Analysis Laboratory (SCALe), a proof-of-concept demonstration that software systems can be conformance tested against secure coding standards, and provides an analysis of selected software systems.
PDF Download

Insider Threat Security Reference Architecture
By Joji Montelibano & Andrew P. Moore
This technical report describes the Insider Threat Security Reference Architecture (ITSRA), an enterprise-wide solution to the threat that organizations face from their own insiders. The ITSRA draws from existing best practices and standards, as well as from analysis of real insider threat cases, to provide actionable guidance for organizations to improve their posture against the insider threat.
PDF Download

Additional Resources
For the latest SEI technical reports and papers, please visit www.sei.cmu.edu/library/reportspapers.cfm
SEI Blog | Sep 10, 2015 09:24am
By David Keaton, Researcher, CERT Secure Coding Program

Buffer overflows—an all too common problem that occurs when a program tries to store more data in a buffer, or temporary storage area, than it was intended to hold—can cause security vulnerabilities. In fact, buffer overflows led to the creation of the CERT program, starting with the infamous 1988 "Morris Worm" incident, in which a buffer overflow allowed a worm entry into a large number of UNIX systems.

For the past several years, the CERT Secure Coding team has contributed to a major revision of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standard for the C programming language. Our efforts have focused on introducing much-needed enhancements to C and its standard library to address security issues, such as buffer overflows. These security enhancements include (conditional) support for bounds-checking interfaces, (conditional) support for analyzability, static assertions, "no-return" functions, support for opening files for exclusive access, and the removal of the insecure gets() function. This blog posting explores two of the changes—bounds-checking interfaces and analyzability—from the December 2011 revision of the C programming language standard, which is known informally as C11 (each revision of the standard cancels and replaces the previous one, so there is only one C standard at a time).

I work on the CERT Secure Coding team, where I've made technical contributions to the definition of the new C features for addressing security. I've also chaired Task Group PL22.11 (programming language C) of the International Committee for Information Technology Standards (INCITS), representing the United States. Working with SEI colleagues Robert C. Seacord and David Svoboda, I helped develop, refine, and introduce many of the security enhancements to this major ISO standard revision.
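A few of the C11 additions listed above can be shown in a short sketch. This is an illustration rather than code from the post; the function names `fatal` and `create_exclusive` are invented here, and the "x" mode of fopen() requires a C11-conforming library:

```c
#include <stdio.h>
#include <stdlib.h>

/* C11 static assertion: checked at compile time, costs nothing at runtime. */
_Static_assert(sizeof(int) >= 4, "this code assumes at least a 32-bit int");

/* C11 _Noreturn: tells the compiler (and static analyzers) that this
   function never returns to its caller. */
_Noreturn void fatal(const char *msg)
{
    fprintf(stderr, "fatal: %s\n", msg);
    exit(EXIT_FAILURE);
}

/* C11 "x" mode: open a file for exclusive creation.  fopen() fails if the
   file already exists, closing the time-of-check/time-of-use race that the
   old "w" mode left open.  Returns 0 on success, -1 if the file exists. */
int create_exclusive(const char *path)
{
    FILE *f = fopen(path, "wx");
    if (f == NULL)
        return -1;
    fclose(f);
    return 0;
}
```

The removal of gets() completes the picture: a function whose interface made buffer overflow unavoidable is simply gone from the C11 library.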
Bounds-Checking Interfaces
Until the latest update of the C standard, its security features had been limited to the snprintf() function, which was introduced in 1999 and whose implementations have some quirks. Previous iterations of the C library contained functions that did not perform automatic bounds checking. Instead, C library implementations assumed that programmers would provide output character arrays large enough to hold the result, returning a notification of failure if they were not. The C standard now includes a library of extensions that can help mitigate security vulnerabilities, including bounds-checking interfaces.

For example, the strcpy() copy function in previous versions of the standard C library did not check the bounds of the array into which it copied. A buffer overflow will therefore occur if a programmer uses strcpy() to copy a larger string into a small array without explicitly checking the bounds of the array before the call. One remedy to the strcpy() problem is to use the strncpy() function, which provides bounds but won't terminate the string with a null character (whose value is 0) if there's insufficient space. Situations like this create a vulnerability because data can be written past the end of the array, overwriting other data and program structures. This buffer overflow vulnerability can be (and has been) misused to run arbitrary code with the permissions of the defective program. If the programmer writes runtime checks to verify lengths before calling library functions, those runtime checks frequently duplicate work done inside the library functions, which discover string lengths as a side effect of doing their job. The new bounds-checking interface provides strcpy_s(), a more secure string copy function that not only checks the bounds of the array it copies into but also ensures that the string is terminated by a null character.
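The behavior that strcpy_s() guarantees can be sketched in portable C. Note that Annex K of C11 is conditional, and many C libraries do not ship it, so this stand-in is deliberately named `safe_strcpy` (a name invented here) rather than pretending to be the real interface:

```c
#include <stddef.h>
#include <string.h>

/* Sketch of the semantics of a bounds-checked string copy in the style of
   C11 Annex K's strcpy_s(): copy src into dst only if it fits, and always
   leave dst null-terminated.  Returns 0 on success, nonzero on failure. */
int safe_strcpy(char *dst, size_t dstsize, const char *src)
{
    if (dst == NULL || src == NULL || dstsize == 0)
        return 1;
    if (strlen(src) >= dstsize) {   /* no room for the string plus '\0' */
        dst[0] = '\0';              /* fail safely with an empty string */
        return 1;
    }
    strcpy(dst, src);               /* now known to fit */
    return 0;
}
```

Contrast this with strncpy(dst, src, n), which silently truncates and, when src fills the buffer exactly, leaves dst with no terminating null character at all.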
Analyzability
Another aspect of the C programming language we focused on in C11 is analyzability, which deals with so-called "undefined" behavior. Undefined behavior arises when a programmer uses a nonportable or erroneous program construct, or erroneous data, for which the C standard imposes no requirement. The C standard leaves several areas of the language undefined because the behavior of those areas depends on compiler implementation details. An example is signed integer overflow. Different hardware behaves differently on signed integer overflow, so mandating one method of dealing with it in the language would hurt performance on some systems because the standard behavior would not match what the hardware does. There are many areas in which the standard makes accommodations for various kinds of hardware, and they are all lumped together into the undefined behavior category. Since the C standard doesn't constrain how a compiler implements undefined behavior, it could conceivably do anything, such as cause the machine to halt and catch fire, though compiler writers who do this might not find many professional programming customers!

We examined this issue and realized that, in practice, there are two categories of undefined behavior: behavior for which we really cannot say what will happen, such as storing data outside the bounds of an object, and behavior where the implementation really should do something reasonable, such as signed integer overflow. We created the Analyzability Annex in C11, in which we labeled the former "critical undefined behavior," indicating that the consequences could be serious. The latter category we called "bounded undefined behavior," because we can say with certainty that nothing unpredictable should be allowed to happen as a result. The category of critical undefined behavior is a small subset of undefined behavior, which means that most undefined behavior becomes bounded.
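Even bounded undefined behavior is still undefined, so portable code must reject signed overflow before it happens. A hedged sketch of the standard precondition test, in the spirit of CERT rule INT32-C (the function name `checked_add` is invented here, not a C11 API):

```c
#include <limits.h>
#include <stdbool.h>

/* Signed integer overflow is undefined behavior in C, so a correct program
   tests the operands *before* the operation.  Adds two ints only when the
   mathematical result fits in an int; returns false instead of overflowing. */
bool checked_add(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return false;               /* would overflow: refuse to add */
    *result = a + b;
    return true;
}
```

The check compares against INT_MAX and INT_MIN using only subtraction that cannot itself overflow, which is why the two operand signs are handled separately.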
We didn't have to change the spirit of the C language to do this, because all we did was specify that bounded undefined behavior is not allowed to store data outside the bounds of an object. In the example of signed integer overflow, this means the implementation could choose to return some reasonable result, cause a trap that terminates the program, or simply print a message and move on. As long as it does not perform an out-of-bounds store, anything is permissible. The bounding of undefined behavior allows analysis tools to know that a C program will not have unpredictable behavior except in a very small set of circumstances, which is why we called it the Analyzability Annex.

Other Areas of Research
While our work to date has focused on the ISO C standard and helping programmers prevent critical undefined behaviors, the CERT Secure Coding team has also been working on the CERT C Secure Coding Standard, which contains a set of rules and guidelines to help programmers code securely. Those guidelines, which will be the subject of an upcoming blog post, leverage our work on the ISO standard to help programmers avoid undefined behavior, as well as behavior that programmers might not have expected when writing their code. The CERT C Secure Coding Standard also serves as a foundation for the Source Code Analysis Lab (SCALe), our software auditing service that can be used to find vulnerabilities and weaknesses in any codebase. SCALe uses a suite of static analysis and dynamic analysis tools to find vulnerabilities in a codebase, based on the patterns and guidelines defined in the CERT C Secure Coding Standard.

Additional Resources
For more information about the new ISO standard for the C programming language, please visit www.open-std.org/jtc1/sc22/wg14/. The C standard is available for purchase in the ANSI Web Store.

For more information about the work of the CERT Secure Coding Team, please visit www.cert.org/secure-coding/.
For more information on the CERT Source Code Analysis Lab (SCALe), please visit www.cert.org/secure-coding/scale/.  
SEI Blog | Sep 10, 2015 09:23am
Second Installment in a Three-Part Series
By Bill Nichols, Senior Member of the Technical Staff, Software Engineering Process Management

This post is the second installment in a three-part series that explains how Nedbank, one of the largest banks in South Africa, is rolling out the SEI's Team Software Process (TSP)—a disciplined and agile software process improvement method—throughout its IT organization. In the first post of this series, I examined how Nedbank addressed issues of quality and productivity among its software engineering teams using TSP at the individual and team level. In this post, I will discuss how the SEI worked with Nedbank to address challenges with expanding and scaling the use of TSP at an organizational level.

Nedbank is one of several relatively large companies to successfully pilot TSP and undertake an organizational rollout. Scaling TSP to larger organizations uncovers new challenges, and introducing the agile concept of empowered teams across an organization requires attention to change management. There are two broad categories of challenges associated with scaling TSP to larger organizations:

- the logistics of rollout and sustainment, such as support, training, and resource coordination
- the change management problem of motivating the staff involved to take on different roles and adopt different patterns of communication

Nedbank addressed these challenges while incorporating TSP into its development methods. Nedbank's largest IT operations include several sites in Johannesburg; staff involved in the rollout are also located in Cape Town, Paarl, Pretoria, and Harare (Zimbabwe). As I discussed in the first post in this series, the pilot projects improved Nedbank's software quality, reduced costs, and enhanced estimation accuracy. The keys to these improvements were realistic planning, disciplined and empowered teams following defined processes, early and objective feedback, and validation.
These qualities were required throughout the rollout to ensure management remained committed to TSP. Our guidance, based on experience with change management and previous TSP rollouts, was to:

- Assemble a project team to plan and execute the rollout and manage stakeholder expectations.
- Develop internal coaching and instructor capacity, anticipating foreseeable needs.
- Deploy to individual groups of staff initially (don't attempt 100 percent deployment too quickly).
- Actively market the initiative to development staff to obtain buy-in.
- Use data to market to management to maintain sponsorship.

We recommended sponsorship of the rollout process by a high-level executive, with an operational "champion" leading the rollout team at the organizational level. Support at the organizational level provides budgeting certainty, giving staff confidence that the TSP effort has long-term commitment. Organizations must also coordinate budgets to ensure resources that support TSP are available when needed. These resources, such as meeting rooms or administrative staff, aren't used daily but rather occasionally, necessitating an organizational support infrastructure. Initial budgeting is also essential because start-up costs are often a barrier for organizations.

A Center of Excellence to Manage the Rollout
A process improvement group sometimes handles rollout. Nedbank chose to implement its rollout with a funded and staffed Center of Excellence (COE). The COE provided an organizational home for the operational champion and his team (including coaches and instructors), explicit budgeting for rollout activities across the organization, and a focal point for managing the change internally. The COE also addresses the sensitive and sometimes contentious issues of rollout, including resource allocation, project selection, coordination of training, coach selection and training, development team support, and rollout evaluation.
This choice in organizational structure was unique in our experience. It was also effective, which became clear as the project moved forward and the organization was prepared for rollout.

Marketing to Developers
Before a full organizational rollout, successful pilot projects are needed to validate that the process works and to get positive references from the pilot participants. Word-of-mouth promotion from peer developers who worked on the pilots helps overcome resistance to change from other development teams throughout the organization. Pilot developers can spread the message that TSP is agile and empowering. The empowerment associated with agile practices can sell itself only after the word gets out. Nedbank produced a video from the first pilot project to communicate how the change had benefited the pilot staff's quality of work life.

Another internal marketing approach used by the COE was to provide developers a comfortable and supportive work environment, reinforcing the sense that this was an important change. The COE scheduled space, ensured the allocation of specific work time for training and team launch, provided lunch and snacks, and prepared welcome packs with themed note pads and pens. Removing these logistical barriers was helpful, but even more important was the credible demonstration to staff that the TSP initiative was a company priority.

Building Coaching Capacity
While there are other ways to support coaching as part of a TSP rollout, Nedbank is doing so through its COE. Part of the COE's job is to select and train coach candidates, then provide them with an organizational career path. The two initial coaches trained during the pilots were soon supplemented by another group of six.
As the organization rolled out TSP, we recommended that Nedbank identify candidates from working TSP teams because those employees had enough experience to make fully informed decisions. With only two pilot projects, this was not practical at Nedbank; however, at least one member of the pilot teams did enter the coaching program. To augment the available coaches, we plan another coach class later this calendar year.

Training the Coaches
During the early rollout, most project teams will be using TSP for the first time. These teams require a coach for launch planning, stakeholder facilitation, launch, weekly coaching, process checkpoints, and postmortem. While SEI staff or partners provide coaching for pilot projects, the organization must identify and train internal coaches for the rollout. The TSP coach will have substantial extra work during this period because TSP will be new to the teams, line management, project managers, team leads, business analysts, and other stakeholders. It is the coach's responsibility to ensure TSP is used properly.

Training a TSP coach requires a minimum of several months, often up to a year. Coaches require full PSP developer training, a week-long coach class, and mentoring through initial coaching activities before taking a certification exam. The training is rigorous because the coaches are the front lines of both organizational change and organizational project performance. Due to the time required to fully certify a coach, coaching is a major constraint in the rollout process. At Nedbank, the COE selects coach candidates, secures funding, schedules training, and deploys coaching and training to the projects. Coaches are responsible for maintaining team performance and helping stakeholders balance needs. The SEI role at this stage is to provide coach training and mentoring. TSP coaches receive instruction in organizational change management but operate mostly at the individual and team level.
The COE addresses change management at the organizational level. The COE first ensures that management, team leads, developers, and other staff have received the standard TSP training. The COE then offers specific seminars or short courses for developers, team leads, senior management, and non-developer team members. At Nedbank, most training was performed by the Johannesburg Center for Software Engineering (JCSE). Due to limited instructor availability, Nedbank is now looking to train internal instructors as well.

Deployment progresses slowly at first when there is a limited supply of coaches. A common mistake is to provide TSP training simultaneously to everyone in the organization when only a subset of the trained employees begins working on TSP projects immediately. Our experience has been that this approach fails because the employees' TSP skills degrade without use. It is important to train team members within a short window—several weeks at most—prior to a launch. The COE, therefore, allocates training only after a project has been approved and a coach designated. The obvious problem—that this can delay project start by up to several weeks—has no simple solution; however, this is not an issue for longer-duration projects (those exceeding several months).

Marketing to Management
The COE establishes a set of organization-wide expectations for the TSP projects, aligned with Nedbank business goals of reducing cost and cycle time. The coaches provide the COE with project summaries, including counts of projects completed, cost and schedule estimation accuracy, data quality, resource estimation accuracy, schedule accuracy, and issues found in QA and production. The COE explains to executive management the projects' progress and how the project management dashboard aligns with organizational goals.
What's Next
The COE-based approach described above allowed Nedbank to verify at the organizational level that the rollout was on track, provided credible data, and maintained sponsorship with the executive management board. The third and final post in this series will examine how the Nedbank approach addressed key challenges alluded to by Jeff Sutherland, co-creator of the Scrum agile method, in his 10-year retrospective on agile methods. If you're interested in learning more about TSP, please consider attending the upcoming TSP Symposium in St. Petersburg, Florida, USA.

Additional Resources
For more information about the 2012 TSP Symposium, please visit www.sei.cmu.edu/tspsymposium/2012/

For more information about TSP, please visit www.sei.cmu.edu/tsp

To read the SEI technical report Deploying TSP on a National Scale: An Experience Report from Pilot Projects in Mexico, please visit www.sei.cmu.edu/library/abstracts/reports/09tr011.cfm

To read the Crosstalk article A Distributed Multi-Company Software Project by Bill Nichols, Anita Carleton, & Watts Humphrey, please visit www.crosstalkonline.org/storage/issue-archives/2009/200905/200905-Nichols.pdf

To read the SEI book Leadership, Teamwork, and Trust: Building a Competitive Software Capability by James Over and Watts Humphrey, please visit www.sei.cmu.edu/library/abstracts/books/0321624505.cfm

To read the SEI book Coaching Development Teams by Watts Humphrey, please visit www.sei.cmu.edu/library/abstracts/books/201731134.cfm

To read the SEI book PSP: A Self-Improvement Process for Engineers by Watts Humphrey, please visit www.sei.cmu.edu/library/abstracts/books/0321305493.cfm
SEI Blog | Sep 10, 2015 09:19am
Second of a Two-Part Series
By Donald Firesmith, Senior Member of the Technical Staff, Acquisition Support Program

In the first blog entry of this two-part series on common testing problems, I addressed the fact that testing is less effective, less efficient, and more expensive than it should be. This second posting highlights results of an analysis that documents problems that commonly occur during testing. Specifically, this series of posts identifies and describes 77 testing problems organized into 14 categories; lists the potential symptoms by which each can be recognized, its potential negative consequences, and its potential causes; and makes recommendations for preventing these problems or mitigating their effects.

Why Testing Is a Problem
A widely cited study for the National Institute of Standards & Technology (NIST) reports that inadequate testing methods and tools annually cost the U.S. economy between $22.2 billion and $59.5 billion, with roughly half of these costs borne by software developers in the form of extra testing and half by software users in the form of failure avoidance and mitigation efforts. The same study notes that between 25 percent and 90 percent of software development budgets are often spent on testing.

Despite this huge investment in testing, recent data from Capers Jones shows that the different types of testing are relatively ineffective. In particular, testing typically identifies only one-fourth to one-half of defects, while other verification methods, such as inspections, are typically more effective. Inadequate testing is one of the main reasons that software is typically delivered with approximately 2 to 7 defects per thousand lines of code (KLOC). While this may seem like a negligible number, the result is that major software-reliant systems are being delivered and placed into operation with hundreds or even thousands of residual defects.
If software vulnerabilities (such as the CWE/SANS Top 25 Most Dangerous Software Errors) are counted as security defects, the rates are even more troubling.

Overview of Different Types of Testing Problems
The first blog entry in this series covered the following general types of problems, which are not restricted to a single kind of testing:

- test planning and scheduling problems
- stakeholder involvement and commitment problems
- management-related testing problems
- test organization and professionalism problems
- test process problems
- test tools and environments problems
- test communication problems
- requirements-related testing problems

The remainder of this second post focuses on the following six categories of problems, each restricted to one type of testing:

- unit testing
- integration testing
- specialty engineering testing
- system testing
- system-of-systems testing
- regression testing

Unit testing problems primarily occur during the testing of individual software modules, typically by the same person who developed the module in the first place. Design volatility can cause excessive iteration of the unit test cases, drivers, and stubs. Unit testing can also suffer from a conflict of interest: developers naturally want to demonstrate that their software works correctly, while testers should seek to demonstrate that software fails. Finally, unit testing may be poorly and incompletely performed because the developers think it is relatively unimportant.

Integration testing problems occur during the testing of a set of units integrated into a component, a set of components into a subsystem, a set of subsystems into a system, or a set of systems into a system of systems. Integration testing concentrates on verifying the interactions between the parts of the whole. One potential problem is the difficulty of localizing defects to the correct part once the parts have been integrated.
A second potential problem is inadequate built-in test software that could help locate the cause of any failed test. A third problem is the potential lack of availability of the correct (versions of the) parts to integrate.

Specialty engineering testing problems occur when an inadequate amount of specialized testing of various quality characteristics and attributes takes place. More specifically, these problems involve inadequate capacity, concurrency, performance, reliability, robustness (e.g., error and fault tolerance), safety, security, and usability testing. While these are the most commonly occurring types of specialty engineering testing problems, other types may also exist depending on which quality characteristics and attributes are important (and thus the type of quality requirements that have been specified).

System testing problems occur during system-level testing and often cannot be eliminated because of the very nature of system testing. At best, recommended solutions can only mitigate these problems. It is hard to test an integrated system's robustness (support for error, fault, and failure tolerance) due to the challenges of triggering system-internal exceptions and tracing their handling. System-level testing can be hard because temporary test hooks have typically been removed so that one is testing the actual system to be delivered. As with integration testing problems, demonstrating that system tests provide adequate test coverage is hard because reaching specific code paths (e.g., fault tolerance paths) using only inputs to the black-box system is hard. Finally, there is often inadequate mission-thread-based testing of end-to-end capabilities because system testing is often performed using use-case-based testing, which is typically restricted to interactions with only a single, primary, system-external actor.
System-of-systems (SoS) testing problems are often the result of SoS governance problems (i.e., everything typically occurs at the system level rather than the SoS level). For example, SoS planning may not adequately cover SoS testing. Often, no organization is made explicitly responsible for SoS testing. Funding is often focused at the system level, leaving little or no funding for SoS testing. Scheduling is typically performed only at the individual system level, and system-level schedule slippages make it hard to schedule SoS testing. SoS requirements are also often lacking or of especially poor quality, making it hard to test the SoS against its requirements. The individual system-level projects rarely allocate sufficient resources to support SoS testing. Defects are typically tracked only at the system level, making it difficult to address SoS-level defects. Finally, there tends to be a lot of finger-pointing and shifting of blame when SoS testing problems arise and SoS testing uncovers SoS-level defects. Note that an SoS almost always consists of independently governed systems that are developed, funded, and scheduled separately. SoS testing problems therefore do not refer to systems that are developed by a prime contractor or integrated by a system integrator, nor do they refer to subsystems developed by subcontractors or vendors.

Regression testing problems occur during the performance of regression testing, both during development and maintenance. Often, there is insufficient automation of regression testing, which makes regression testing too labor-intensive to perform repeatedly, especially when using an iterative and incremental development cycle. This overhead is one of the reasons that regression testing may not be performed as often as it should be.
When regression testing is performed, its scope is often too localized because software developers assume that changes in one part of the system will not propagate to other parts and thereby cause faults and failures. Low-level regression testing is commonly easier to perform than higher-level regression testing, which results in an over-reliance on low-level regression tests. Finally, the test resources created during development may not be delivered and thus may not be available to support regression testing during maintenance.

Addressing Test-Type-Specific Problems
For each testing problem described above, I have documented several types of information useful for understanding the problem and implementing a solution. This information will appear in an upcoming SEI technical report. As an example of what will appear in this report, the testing problem "Insufficient Regression Test Automation" has been documented with the information described below.

Description. Too few of the regression tests are automated.

Potential symptoms. Many or even most of the tests are being performed manually.

Potential consequences. Manual regression testing takes so much time and effort that it is not done. If performed, regression testing is rushed, incomplete, and inadequate to uncover a sufficient number of defects. Testers make an excessive number of mistakes while manually performing the tests. Defects introduced into previously tested subsystems/software while making changes may remain in the operational system.

Potential causes.
Testing stakeholders (e.g., managers and the developers of unit tests) may mistakenly believe that performing regression testing is neither necessary nor cost effective because:

- most changes have only a minor scope
- system testing will catch any inadvertently introduced integration defects
- they are overconfident that changes have not introduced any new defects

Testing stakeholders may also not be aware of:

- the importance of regression testing
- the value of automating regression testing

Other potential causes may include:

- Automated regression testing may not be an explicit part of the testing process.
- Automated regression testing may not be incorporated into the Test and Evaluation Master Plan (TEMP) or System/Software Test Plan (STP).
- The schedule may contain little or no time for the development and maintenance of automated tests.
- Tool support for automated regression testing may be lacking (e.g., due to insufficient test budget) or impractical to use.
- The initially developed automated tests may not be maintained.
- The initially developed automated tests may not be delivered with the system/software.

Recommendations.

Prepare by explicitly addressing automated regression testing in the project's:

- TEMP or STP
- test process documentation (e.g., procedures and guidelines)
- master schedule
- work breakdown structure (WBS)

Enable solution of the problem by:

- providing training/mentoring to the testing stakeholders in the importance and value of automated regression testing
- providing sufficient time in the schedule for automating and maintaining the tests
- providing sufficient funding to pay for automated test tools
- ensuring that adequate resources (staffing, budget, and schedule) are planned and available for automating and maintaining the tests

Perform the following tasks:

- Automate as many of the regression tests as is practical.
- Where appropriate, use commercially available test tools to automate testing.
- Ensure that both automated and manual test results are integrated into the same overall test results database so that test reporting and monitoring are seamless.
- Maintain the automated tests as the system/software changes.
- Deliver the automated tests with the system/software.
- When relevant, identify this problem as a risk in the project risk repository.

Verify that:
- The test process documentation addresses automated regression testing.
- The TEMP/STP and WBS address automated regression testing.
- The schedule provides sufficient time to automate and maintain tests.
- A sufficient number of the tests have been automated.
- The automated tests function properly.
- The automated tests are properly maintained.
- The automated tests are delivered with the system/software.

Related problems. No separate test plan, incomplete test planning, inadequate test schedule, unrealistic testing expectations / false sense of security, inadequate test resources, inadequate test maintenance, over-reliance on manual testing, tests not delivered, inadequate test configuration management (CM).

Benefits of Using the Catalog of Common Testing Problems

This analysis of commonly occurring testing problems—and recommended solutions—can be used as training material to help testing stakeholders learn how to avoid, identify, understand, and mitigate testing problems. Like anti-patterns, these problem categories can be used to improve communication between testers and testing stakeholders. This list can also be used to categorize problem types for metrics collection.
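To make the automated regression testing that the recommendations above advocate concrete, here is a minimal sketch using Python's built-in unittest framework. The pricing function and its expected values are hypothetical, invented purely for illustration; the point is that the whole suite re-runs unattended after every change.

```python
import unittest

def apply_discount(price, discount_pct):
    """Hypothetical production function placed under regression test."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return round(price * (1 - discount_pct / 100), 2)

class ApplyDiscountRegressionTests(unittest.TestCase):
    """Automated checks that re-run after every change to catch regressions."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run the whole suite with: python -m unittest <this_module>
```

Because such a suite runs without human effort, it can be wired into the build so that every change triggers a full regression pass, which directly supports several of the verification items above: the tests are automated, maintainable, and deliverable with the system/software.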
Finally, they can be used as a checklist when:
- producing test plans and related documentation
- evaluating contractor proposals
- evaluating test plans and related documentation (quality control)
- evaluating the as-performed test process (quality assurance)
- identifying test-related risks and their mitigation approaches

Future Work

The framework of testing problems outlined in this series is the result of more than three decades of experience in assessments and my involvement in numerous projects and discussions with testing subject matter experts. Even after all this time, however, several unanswered questions remain that I intend to make the subject of future study:

Probabilities. Which of these problems occur most often? What is the probability distribution of these problems? Which problems tend to cluster together? Do different problems tend to occur with different probabilities in different application domains (such as commercial versus governmental versus military, and web versus information technology versus embedded systems)?

Severities. Which problems have the largest negative consequences? What are the probability distributions of harm caused by each problem?

Risk. Based on the above probabilities and severities, which of these problems cause the greatest risks? Given these risks, how should one prioritize the identification and resolution of these problems?

I am interested in turning my work on this topic into an industry survey and performing a formal study to answer these questions. I welcome your feedback on my work to date in the comments section below.

Additional Resources

To view a presentation on this work, please see Common Testing Problems: Pitfalls to Prevent and Mitigate and the associated Checklist Including Symptoms and Recommendations, which were presented at the FAA Verification and Validation Summit 8 (2012) in Atlantic City, New Jersey, on 10 October 2012.
SEI Blog · Sep 10, 2015 09:18am
By Donald Firesmith, Senior Member of the Technical Staff, Software Solutions Division

The verification and validation of requirements are a critical part of systems and software engineering. The importance of verification and validation (especially testing) is a major reason that the traditional waterfall development cycle underwent a minor modification to create the V model that links early development activities to their corresponding later testing activities. This blog post introduces three variants on the V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.

The Traditional V Model

Verification and validation are typically performed using one or more of the following four techniques:
- analysis—the use of established technical or mathematical models, simulations, algorithms, or scientific principles and procedures to determine whether a work product meets its requirements
- demonstration—the visual examination of the execution of a work product under specific scenarios to determine whether it meets its requirements
- inspection—the visual examination (possibly including physical manipulation or the use of simple mechanical or electrical measurement) of a non-executing work product to determine whether it meets its requirements
- testing—the stimulation of an executable work product with known inputs and preconditions, followed by the comparison of its actual with its required response (outputs and postconditions), to determine whether it meets its requirements

The V model is a simple variant of the traditional waterfall model of system or software development. As illustrated in Figure 1, the V model builds on the waterfall model by emphasizing verification and validation.
The V model takes the bottom half of the waterfall model and bends it upward into the form of a V, so that the activities on the right verify or validate the work products of the activities on the left. More specifically, the left side of the V represents the analysis activities that decompose the users' needs into small, manageable pieces, while the right side of the V shows the corresponding synthesis activities that aggregate (and test) these pieces into a system that meets the users' needs.

Figure 1: Traditional Single V Model of System Engineering Activities. To view a larger version of this model, please click on the image.

Like the waterfall model, the V model has both advantages and disadvantages. On the positive side, it clearly represents the primary engineering activities in a logical flow that is easily understandable and balances development activities with their corresponding testing activities. On the other hand, the V model is a gross oversimplification in which these activities are illustrated as sequential phases rather than activities that typically occur incrementally, iteratively, and concurrently, especially on projects using evolutionary (agile) development approaches. Software developers can lessen the impact of this sequential phasing limitation if they view development as consisting of many short-duration Vs, one for each concurrent iterative increment, rather than a small number of large Vs. When programmers apply a V model to the agile development of a large, complex system, however, they encounter some potential complications that require more than a simple collection of small V models, including the following:
- The architecturally significant requirements and associated architecture must be engineered and stabilized as rapidly as is practical. All subsequent increments depend on the architecture, which becomes hard—and expensive—to modify after the initial increments have been based on it.
- Multiple, cross-functional agile teams will be working on different components and subsystems simultaneously, so their increments must be coordinated across teams to produce consistent, testable components and subsystems that can be integrated and released.

Another problem with the V model is that the distinction between unit, integration, and system testing is not as clear cut as the model implies. For instance, a certain number of test cases can sometimes be viewed as both unit and integration tests, thereby avoiding redundant development of the associated test inputs, test outputs, test data, and test scripts. Nevertheless, the V model is still a useful way of thinking about development as long as everyone involved (especially management) remembers that it is merely a simplifying abstraction and not a complete and accurate model of modern system or software development. Many testers still use the traditional V model because they are not familiar with the following V models, which are more appropriate for testing.

V Models from the Tester's Point of View

While a useful if simplistic model of system or software development, the traditional V model does not adequately capture development from the tester's point of view. This post discusses three variations of the traditional V model of system/software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method. The single V model modifies the nodes of the traditional V model to represent the executable work products to be tested rather than the activities used to produce them. The double V model adds a second V to show the type of tests corresponding to each of these executable work products.
The triple V model adds a third V to illustrate the importance of verifying the tests themselves to determine whether they contain defects that could stop or delay testing or lead to false-positive or false-negative test results.

As mentioned above, testing is a major verification technique intended to determine whether an executable work product behaves as expected or required when stimulated with known inputs. Testers test these work products by placing them into known pretest states (preconditions), stimulating them with appropriate inputs (data, messages, and exceptions), and comparing the actual results (postconditions and outputs) with the expected or required results to find the faults and failures that reveal underlying defects. Figure 2 shows the tester's single V model, which is oriented around these work products rather than the activities that produce them. In this case, the left side of the V illustrates the analysis of ever more detailed executable models, whereas the right side illustrates the corresponding incremental and iterative synthesis of the actual system. Thus, this V model shows the executable work products that are tested rather than the general system engineering activities that generate them.

Figure 2: Tester's Single V Model of Testable Work Products. To view a larger version of this model, please click on the image.

The Tester's Double V Model

Traditionally, only the right side of the V model dealt with testing. The requirements, architecture, and design work products on the left side of the model have been documents and informal diagrams that were best verified by such manual verification techniques as analysis, inspections, and reviews. With the advent of model-based development, the requirements, architecture, and design models became better defined through more formally defined modeling languages, and it became possible to use automated tools that implement static analysis techniques to verify these models.
More recently, further advances in modeling languages and associated tools have resulted in executable models that can actually be tested by stimulating them with test inputs and comparing actual with expected behavior.

Figure 3 shows the tester's double V model, which adds the corresponding tests to the tester's single V model. The double V model allows us to detect and fix defects in the work products on the left side of the V before they can flow into the system and its components on the right side of the V. In the double V model, every executable work product should be tested. Testing need not—and in fact should not—be restricted to the implemented system and its parts. It is also important to test any executable requirements, architecture, and design so that defects in the models are found and fixed before they can migrate into the actual system and its parts. This process typically involves testing an executable requirements, architecture, or design model (or possibly a prototype) that:
- is implemented in a modeling language (often state-based), such as the SpecTRM Requirements Language (SpecTRM-RL), the Architecture Analysis and Design Language (AADL), or a program design language (PDL)
- is sufficiently formal to be executable using an appropriate associated tool
- simulates the system under test

Tests should be created and performed as the corresponding work products are created. In Figure 3, the short arrows with two arrowheads show that (1) the executable work products can be developed first and used to drive the creation of the tests, or (2) test-driven development (TDD) can be used, in which case the tests are developed before the work products they test. The top row of the model uses testing to validate that the system meets the needs of its stakeholders (that is, that the correct system is built).
Conversely, the bottom four rows of the model use testing to verify that the system is built correctly (that is, that the architecture conforms to the requirements, the design conforms to the architecture, the implementation conforms to the design, and so on).

In addition to the standard double V model, there are two variants that deserve mention. There is little reason to perform unit testing if model-driven development (MDD) is used, a trusted tool is used to automatically generate the units from the unit design, and unit design testing has been performed and passed. Similarly, there is little reason to perform separate unit design testing if the unit design has been incorporated into the unit using the programming language as a program design language (PDL), so that unit testing verifies both the unit's design and implementation.

Figure 3: Tester's Double V Model of Testable Work Products and Corresponding Tests. To view a larger version of this model, please click on the image.

The Tester's Triple V Model

The final variant of the traditional V model, the triple V model, consists of three interwoven V models. The left V model shows the main executable work products that must be tested. The middle V model shows the types of tests that are used to verify and validate these work products. The right V model shows the verification of the testing work products in the middle V. The triple V model uses the term verification rather than tests because the tests are most often verified by analysis, inspection, and review.

Figure 4 below documents the tester's triple V model, in which additional verification activities have been added to determine whether the testing work products are sufficiently complete and correct that they will not produce numerous false-positive and false-negative results.

Figure 4: The Tester's Triple V Model of Work Products, Tests, and Test Verification. To view a larger version of this model, please click on the image.
Conclusion

As we have demonstrated above, relatively minor changes to the traditional V model make it far more useful to testers. Modifying the traditional V model to show executable work products instead of the development activities that produce them emphasizes that these are the work products that testers will test. By associating each of these executable work products with its associated tests, the double V model makes it clear that testing does not have to wait until the right side of the V. Advances in the production of executable requirements, architectures, and designs enable testing to begin much earlier on the left side of the V, so that requirements, architecture, and design defects can be found and fixed early, before they can propagate into downstream work products. Finally, the triple V model makes it clear that it is not just the primary work products that must be verified. The tests themselves should be deliverables and must be verified to ensure that defects in the tests do not invalidate the test results by causing false-positive and false-negative test results.

The V models have typically been used to describe the development of the system and its subsystems. The test environments (or test beds) and test laboratories and facilities are also systems, however, and must be tested and otherwise verified. Thus, these test-oriented V models are applicable to them as well.

This blog entry has been adapted from chapter one of my book Common System and Software Testing Pitfalls, which will be published this December by Addison-Wesley as part of the SEI Series in Software Engineering. I would welcome your feedback on these suggested variations of the traditional V model in the comments section below.

Additional Resources

To read the SEI technical report, "Reliability Validation and Improvement Framework" by Peter Feiler, John Goodenough, Arie Gurfinkel, Charles Weinstock, and Lutz Wrage, please visit http://www.sei.cmu.edu/reports/12sr013.pdf.
SEI Blog · Sep 10, 2015 09:15am
To view a video of this blog post in its entirety, please click here.

By Douglas C. Schmidt, Principal Researcher

To view a video of the introduction, please click here.

The Better Buying Power 2.0 initiative is a concerted effort by the United States Department of Defense to achieve greater efficiencies in the development, sustainment, and recompetition of major defense acquisition programs through cost control, elimination of unproductive processes and bureaucracy, and promotion of open competition. This SEI blog posting describes how the Navy is operationalizing Better Buying Power in the context of its Open Systems Architecture and Business Innovation initiatives. This posting also presents results from a recent online war game that underscore the importance of automated testing in these initiatives to help avoid common traps and pitfalls of earlier cost containment measures.

Overview of the Navy's Open Systems Architecture Initiative

To view a video of this section, please click here.

Given the expense of our major defense acquisition programs—coupled with budget limitations stemming from the fiscally constrained environment—the Department of Defense (DoD) has made cost containment a top priority. In response, the Navy has devised and supported various Open Systems Architecture initiatives, such as the Future Airborne Capability Environment (FACE), a technical standard aimed at enhancing interoperability and software portability for avionics software applications across DoD aviation platforms.
The goals of these types of initiatives are to deliver enhanced integrated warfighting capability at lower cost across the enterprise and throughout the lifecycle by:
- using modular, loosely coupled, and explicitly articulated architectures that provide many shared and reusable capabilities to warfighter applications
- fully disclosing requirements, architecture, and design specifications and development work products to program performers
- adopting common components based on published open interfaces and standards
- achieving interoperability between hardware and/or software applications and services by applying common protocols and data models
- amortizing the effort needed to create conformance and regression test suites that help automate the continuous verification, validation, and optimization of functional and non-functional requirements

Overview of the Navy's Business Innovation Initiative

To view a video of this section, please click here.

Achieving the goals of Open Systems Architecture requires the Navy to formulate a strategy for decomposing large monolithic programs and technical designs into manageable, capability-oriented frameworks that can integrate innovation more rapidly and lower total ownership costs. A key element of this strategy is the Navy's Business Innovation Initiative, which is investigating various changes in the business relationships between an acquisition organization and its contractor(s) to identify rational, actionable reform for new acquisition strategies, policies, and processes. These business relationship changes aim to open up competition, incentivize better contractor performance, increase access to innovative products and services from a wider array of sources, decrease the time to field new capabilities, and achieve lower acquisition and lifecycle costs while sustaining fair industry profitability.
Although there's a clear and compelling need for new business and technical models for major defense acquisition programs, aligning the Naval acquisition community to the new Open Systems Architecture and Business Innovation initiatives presents a complex set of challenges and involves many stakeholders. To better understand these challenges, and to identify incentives that meet its future demands, the Navy ran two Massive Multiplayer Online Wargames Leveraging the Internet (MMOWGLI) in 2013. The Navy used these games to crowd-source ideas from contractors, government staff, and academics on ways to encourage industry and the acquisition workforce to use an Open Systems Architecture strategy.

Overview of the Massive Multiplayer Online Wargame Leveraging the Internet (MMOWGLI)

To view a video of this section, please click here.

The MMOWGLI platform was developed by the Naval Postgraduate School in Monterey, California. This web-based platform supports thousands of distributed players who work together in a crowd-sourcing manner to encourage innovative thinking, generate problem-solving ideas, and plan actions that realize those ideas. The first Navy Business Innovation Initiative MMOWGLI game was held in January 2013. The primary objective of the game was to validate the use of the MMOWGLI platform for gathering innovative ideas to improve the business of Naval systems acquisition. This game was extremely successful, generating 890 ideas and 11 action plans. In addition, the results validated the soundness of the overall Navy Open Systems Architecture strategy and illuminated many ideas for further exploration in subsequent events with broader audiences. A second Navy Business Innovation Initiative MMOWGLI was conducted from July 15 to August 1, 2013.
The purpose of this game was to generate ideas from a wider audience of acquisition professionals on how best to incentivize industry, and how to motivate the government workforce, to adopt OSA business models in the procurement, sustainment, and recompetition of national defense systems. The 1,750 ideas presented through this exercise were later validated and translated into 15 action plans for implementing the Navy's Open Systems Architecture strategy. More than half of the nearly 300 participants in the game were from industry, and many of these were from the small business community.

Results from the Second MMOWGLI on the Navy's Business Innovation Initiative

To view a video of this section, please click here.

Given the current fiscal climate in the DoD, it's not surprising that many action plans in the second Business Innovation Initiative MMOWGLI war game dealt with cost-containment strategies. Below, I have listed several action plans (each followed by its goal) that came out of the second Business Innovation Initiative MMOWGLI war game:
- Provide a bonus to Navy team members who save money on acquisition programs. The goal is to incentivize program office teams to take both a short- and long-term view toward efficient acquisitions by optimizing prompt/early delivery of artifacts with accrued savings over the lifecycle.
- Reward a company for saving money on an acquisition contract: top savers would be publicly recognized and rewarded. The goal is to allow effective public image improvement for both government and industry partners of all sizes and types to receive tangible recognition of cost-saving innovations.
- Increase the incentive paid to a contractor if the actual cost of its delivered solution is less than the targeted cost. The goal is to give industry a clear mechanism for reporting cost savings, a clear way to calculate the reward for cost savings, and a transparent method for inspecting actuals over time.
Avoiding Common Traps and Pitfalls of Cost Containment via Automated Testing

To view a video of this section, please click here.

Although cutting costs is an important goal, it's critical to recognize that cost containment alone may be a hollow victory if it yields less costly—but lower quality—solutions that don't meet their operational requirements and can't be sustained effectively and affordably over their lifecycles. It's therefore essential to balance cost savings, on the one hand, with stringent quality control, on the other. What is needed are methods, techniques, tools, and processes that enable software and systems engineers, program managers, and other acquisition professionals to ensure that cost-cutting strategies don't compromise the quality and sustainability of their integrated solutions. In particular, MMOWGLI action plans that identify reward structures need to be balanced with action plans that avoid situations where contractors—or even government acquisition professionals—game the system by cutting costs (to get a bonus) while ignoring key system quality attributes (such as dependability, maintainability, and security) to the detriment of both the end users (warfighters, planners, operators, et al.) and the organizations responsible for the long-term sustainment of the systems. Ideally, contractors and the government should be incentivized to control costs while still ensuring that a quality product is delivered in a manner that is both operationally capable and affordably sustainable over the program lifecycle. The $640 billion question is "how can we help acquisition professionals and technologists better achieve the balance between quality and affordability?" The Business Innovation Initiative MMOWGLI participants collaborated to provide a key piece of the solution.
In particular, some of the highest-ranked action plans from the second MMOWGLI game addressed the need for automated testing and retesting as an evaluator and enforcer of system quality. Testing stimulates an executable component or work product with known inputs under known preconditions and then compares its actual outputs and postconditions with those expected, to determine whether its actual behavior is consistent with its required behavior. Automated testing is essential to achieving positive cost-saving results in OSA initiatives: it ensures that the components and products delivered at lower cost have the requisite quality, and it reduces the time and effort required to conduct the testing process.

MMOWGLI Action Plans for Automated Testing

To view a video of this section, please click here.

The top-rated action plan from the second MMOWGLI game proposed using automated tools and industry best practices to reduce manual test and evaluation effort and to increase the coverage of automated regression testing in mission-critical Naval systems. When there are many frequent development blocks—as is the case with iterative and incremental development methods—it is necessary to perform regression testing on the previously developed software to verify that it continues to operate properly after being (1) integrated with the new software and (2) evolved as defects are fixed and improvements are made. Iterative and incremental development cycles greatly increase the need for regression testing, and this additional testing becomes infeasible when performed manually. Another testing-related action plan was also well received, ranking 8th out of a total of 15 action plans.
This action plan recommended reducing certification costs by requiring integrated validation and verification processes to involve automated testing, including assurance cases; test plans, procedures, reports, and scripts; and test data, tools, environments, and labs. The goal is to replace time-consuming manual testing methods with formalized automated testing across the lifecycle by defining, delivering, and maintaining testing work products along with components, to enable effective, efficient, and repeatable testing during component development, system integration, sustainment, and recompetition. Ironically, the action plans that focused on cost containment alone were ranked lower by the participants (10th, 12th, and 14th out of the 15 action plans). Based on an analysis of comments, their low ranking did not appear to stem from a lack of interest in controlling costs, but rather from the realization that without effective automation of testing and retesting, the benefits of cost savings and efficiencies from OSA initiatives may be compromised by inadequate quality.

A Way Forward for Automated Testing in Open Systems Architecture Initiatives

To view a video of this section, please click here.

After carefully analyzing the results of the two MMOWGLI war games, the Navy recommended that a subsequent study area in the Open Systems Architecture and Business Innovation initiatives focus on affordable testing. The goal is to establish quality assurance processes that are efficient and comprehensive during the design phase and initial delivery, as well as throughout the sustainment and recompetition phases. It's also critical that these automated tests be delivered with the system and include sufficient documentation so that government or sustainment contractors can both execute and maintain them. Of course, it's also important to recognize the limitations of automated testing. There is a significant up-front cost in automating tests.
Likewise, the resulting testing work products must be engineered to a high standard of quality. Moreover, automation may not be appropriate—or even feasible—for every type of testing, such as usability testing or penetration testing performed by tiger teams. The take-home point from our experience with both Business Innovation Initiative MMOWGLI games is that by combining effective testing with other action plans, the benefits of cost savings and efficiencies from Open Systems Architecture initiatives may be achieved without compromising the quality of the results. We don't just want competition; we don't just want lower cost. Instead, we need to use competition to get the same or better quality at a cost we can afford. Our next step is to help the Department of the Navy formulate a comprehensive whole-lifecycle testing and quality assurance approach—together with a path toward standardization of automated testing and retesting methods and tools—to assist with lowering the cost and time to field innovative and robust capabilities. Our goal at the SEI is also to help promote the visibility and strategic importance of testing across government and industry to enhance the quality of delivered components and integrated products, as well as to spur innovation in quality assurance methods and tools by the defense industrial base and commercial information technology companies.

Additional Resources

Unfortunately, we see both program offices and defense contractors making the same mistakes repeatedly when it comes to testing. My colleague, Donald Firesmith, has collected and analyzed 92 commonly occurring testing pitfalls, organized them into 14 categories, and published them in the recent book Common System and Software Testing Pitfalls. Since the book was finalized, he has identified 15 additional testing pitfalls, as well as a new category of pitfalls, which can be viewed on his website at http://donald.firesmith.net/home/common-testing-pitfalls.
For more information on how the Navy is crowd-sourcing ideas via the MMOWGLI platform to promote innovative acquisition strategies, please see https://portal.mmowgli.nps.edu/bii-blog/-/blogs/inside-the-navy-bii-game-review
SEI Blog · Sep 10, 2015 09:13am
By Carol Woody, Technical Manager, Cybersecurity Engineering

This blog post was co-authored by Robert Ellison. The Wireless Emergency Alerts (WEA) service went online in April 2012, giving emergency management agencies such as the National Weather Service or a city's hazardous materials team a way to send messages to mobile phone users located in a geographic area in the event of an emergency. Since the launch of the WEA service, the newest addition to the Federal Emergency Management Agency (FEMA) Integrated Public Alert and Warning System (IPAWS), "trust" has emerged as a key issue for all involved. Alert originators at emergency management agencies must trust WEA to deliver alerts to the public in an accurate and timely manner. The public must also trust the WEA service before it will act on the alerts. Managing trust in WEA is a responsibility shared among the many stakeholders engaged with WEA. This blog post, the first in a series, highlights recent research aimed at enhancing both the trust of alert originators in the WEA service and the public's trust in the alerts it receives.

The types of messages that the WEA service can issue to the public on their mobile phones include

- extreme weather and other threatening emergencies
- AMBER Alerts
- Presidential Alerts during a national emergency

While all of the major cell phone carriers distribute WEA messages, the service is restricted to newer, WEA-capable cell phones (see http://www.ctia.org/your-wireless-life/consumer-tips/wireless-emergency-alerts for more specific device details). Messages are distributed over a very low-bandwidth channel, so even if internet bandwidth is disrupted during an emergency, messages can still be distributed and received.

Establishing a Trust Model

Trust is the result of many positive and negative influencing factors, and it is important to examine each of these to determine which factors are the most critical.
As part of our research, we interviewed many public alerting experts, seeking information about successful strategies for establishing trust.  From the interviews we built a series of scenarios. Using those scenarios, we conducted surveys to assemble the range of positive and negative reactions to different scenarios. We then assembled these reactions into a Bayesian Belief Network. Of the approximately 80 factors identified from the interviews, we isolated the ones that were important to pay attention to - the ones with the greatest influence on trust. There were many conflicting factors, so we had to consider how these factors influenced each other, as well. For example, while the speed at which an alert is issued was identified as an important factor, members of the public are less likely to trust the alert if it contains misspellings and misplaced words. Care and review in crafting the message content must also be considered.  Understanding and accounting for trade-offs became an important facet of our work. If alert originators don’t fully understand the inherent conflicts between factors they want to maximize, they might make counterproductive decisions.  As we outlined in our technical report on this topic, Maximizing Trust in the Wireless Emergency Alerts Service, there are issues affecting trust in the WEA Service for both alert originators and the public receiving the alerts.  Alert Originator Issues  Alert originators are federal, state, territorial, tribal, and local authorities approved by FEMA to issue critical public alerts and warnings. The sources of emergency alerts include police and fire protection groups, the National Weather Service, and the National Center for Missing and Exploited Children.   
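The modeling step described above, in which influencing factors and their interactions are assembled into a Bayesian belief network, can be illustrated with a toy sketch. The factors, probabilities, and trade-off (speed helps trust, but a rushed, sloppy message hurts it) are invented for illustration and are not values from the SEI study:

```python
# Toy Bayesian-network sketch of an alert-trust model.
# Factor names and all probabilities are illustrative assumptions,
# not figures from the SEI trust model.

P_fast = 0.7    # prior: P(alert is issued quickly)
P_clear = 0.8   # prior: P(message is well written, no misspellings)

# Conditional probability table: P(recipient trusts alert | fast, clear).
# Note the trade-off: speed helps, but a rushed, unclear message hurts.
P_trust = {
    (True,  True):  0.90,
    (True,  False): 0.40,   # fast but sloppy: looks like spam
    (False, True):  0.65,
    (False, False): 0.25,
}

def marginal_trust():
    """Marginal P(trust) by enumerating all factor states."""
    total = 0.0
    for fast in (True, False):
        for clear in (True, False):
            p_state = ((P_fast if fast else 1 - P_fast)
                       * (P_clear if clear else 1 - P_clear))
            total += p_state * P_trust[(fast, clear)]
    return total

print(f"P(trust) = {marginal_trust():.3f}")
```

Exploring how the marginal changes as each factor's prior varies is one simple way to identify the factors with the greatest influence on trust.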
Since FEMA launched the WEA service, alert originators have been evaluating wireless alert distribution to determine whether they will use this capability and how they would acquire the resources needed to integrate it with the alerting capabilities they already use (e.g., public radio and television broadcasting, highway signage). In a sense, the alert originators were struggling with issues typically seen when a software system expands. Based on our analysis, the WEA service requires maximizing three key outcomes:

- Appropriateness - the suitability of WEA as an alerting solution within the context of a particular incident.
- Availability - the ability of alert originators to use the WEA service when needed.
- Effectiveness - the ability of the WEA service to produce outcomes desired by alert originators.

Using the alert originator's trust model, we identified several key factors that influence each of these three outcomes, including the following:

Severity and urgency. The WEA service is intended for use only in the most serious emergency events. Our trust model confirmed that the urgency of the incident must be classified as either immediate or expected, requiring action immediately or within the hour. Since these types of alerts are infrequent, emergency management agencies should have clear approval and usage procedures in place to ensure the appropriateness of this distribution channel. In some cases, alert originators may access the WEA service through integrated alerting software that issues notifications simultaneously through WEA and other channels, requiring further coordination of use.

Certainty. Alerts issued using the WEA service need to be verifiable. The WEA service is intended for use only in incidents with a high degree of certainty: the certainty of the incident must either be observed (determined to have occurred or be ongoing) or likely (the probability of occurrence is greater than 50 percent).
Alert originators must also receive information from their sources with sufficient timeliness to make use of WEA.

Geographic breadth. Alerts issued using the WEA service need to be targeted to the size and location of the geographic region affected by the emergency event. Current usage is limited to county designations, which are effective in some but not all cases. In some states, counties are huge, and notification of an emergency in one area of a county may reach recipients hundreds of miles away. Conversely, in major metropolitan areas, where distances are smaller but population density is much higher, current WEA geographic granularity may also result in many people receiving alerts for an event that is not relevant to them. As a result, recipients may become desensitized to the alerting process, increasing the likelihood that they could ignore an alert that is critical for them.

Accessibility. System accessibility is a critical factor in securing the trust of alert originators. Factors that influence accessibility include security decisions that make the WEA service accessible from only a few dedicated terminals within the alert originator's office. WEA messages are not an everyday occurrence, and operators' familiarity with these terminals will improve if they have greater access. We found that accessibility improves if alert originators can access the WEA service through integration with other alerting and emergency management applications they use more frequently. Several alert originators we spoke with expressed a desire for remote capabilities (e.g., issuing an alert from the scene of an incident). Although we are unaware of any software that supports this type of remote access, it is a feature that warrants investigation by suppliers of alerting software. Remote access provides opportunities to attackers as well as legitimate users, so effective security will be important to preserving trust.
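The geographic-granularity problem above can be made concrete with a toy calculation: when an alert is broadcast county-wide but the event affects only a small area, most recipients get an alert that is irrelevant to them. Coordinates and figures below are made up for illustration:

```python
import math

# Illustrative sketch of county-level alert granularity: a broadcast
# reaches every recipient in the "county", but only those near the
# event actually need the alert. All numbers are invented.

def relevant_fraction(recipients, event, radius):
    """Fraction of alerted recipients actually inside the event area."""
    inside = sum(1 for (x, y) in recipients
                 if math.hypot(x - event[0], y - event[1]) <= radius)
    return inside / len(recipients)

# A "county" 100 miles across, with an event affecting a 10-mile radius.
recipients = [(x, y) for x in range(0, 100, 5) for y in range(0, 100, 5)]
print(f"relevant: {relevant_fraction(recipients, (10, 10), 10):.1%}")
```

Even this crude model shows that only a few percent of recipients are inside the affected area, which is the desensitization risk described above.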
Securing Public Trust in the WEA Service

Ultimately, an alert originator's message can be measured by whether or not the public takes the recommended action, and this will only occur if the recipient trusts the sender. We analyzed public trust in the WEA service by considering factors that could affect a recipient's response to a WEA alert:

- reading or listening to an alert
- understanding an alert
- believing an alert is credible
- acting appropriately in response to an alert

Our analysis showed that the message must be well written so that it clearly identifies the individuals affected, the reason for action, and a recommended response. Analysis of our model for public trust identified several factors that influence each of these outcomes, including the following:

Clarity. Recipient feedback suggested that poor grammar and spelling can lead a recipient to treat an alert as spam and ignore the suggested action.

Explanation. While alert originators have control over content, the message cannot exceed 90 characters under the current WEA service design. A statement about where to find additional information would increase recipient trust. High-severity events with short lead times could require multiple alerts to provide the necessary information.

Timing. Public trust is also affected by the timing of messages. Our model indicated that additional lead time on an alert, giving the recipient more time to respond, significantly increased trust.

Frequency. As expected, too many alerts that are not applicable to the recipient will reduce recipient trust. We also found that a lack of coordination among local jurisdictions can increase the frequency of alerts, lead to confusion and misinformation, and raise credibility concerns for all involved. Within any jurisdiction, multiple agencies may have authority to issue an alert. To avoid confusion, a clear hierarchy must be established.
This understanding is best established through interagency agreements that define alerting responsibilities and through regular, frequent communication among agencies. Alert originators must also establish processes and communication channels with neighboring jurisdictions to notify them when an alert is being issued so that they may also prepare, for example, by handling calls to the 911 call center. More in-depth information on this topic may be found in the WEA Governance Guide in the report Best Practices in Wireless Emergency Alerts.

Finally, our work on this project found that alert originators and public recipients are more likely to trust the WEA service if they can verify the alerts through another channel, such as Twitter. For example, Twitter provided critical information during the northeastern weather emergencies in the fall of 2012 (Hurricane Sandy). In addition, social media outlets may be able to provide insight into public reaction to an alert and enable alert originators to monitor response, tailor follow-up messages, and adjust future alerting strategies.

Recommendations and Future Work

As a result of our research on issues that affect trust in the WEA service, we developed recommendations for alert originators to increase their trust in WEA and the public's trust in the alerts they receive. Recommendations that improve the trust alert originators have in WEA include the following:

- Procure or create your WEA system to maximize its accessibility when and where it is needed.
- Periodically verify the performance of your system to ensure reliable operation when the system is needed.
- Integrate WEA system alerting with other emergency management agency operations to maximize operator familiarity with system operation.
- Consider the time required to generate and issue an alert when acquiring your system.
- Establish a means to ensure the accuracy of the alert messages that will be issued.
- Collect and analyze feedback from prior alerts to monitor and improve effectiveness.

Recommendations that increase the public's trust in the alerts they receive include the following:

- Use WEA only for events of high urgency, severity, or certainty.
- When deciding to issue a WEA alert, consider the geographic footprint of the event relative to the footprint of the alert.
- Ensure clarity of message, spelling, and grammar.
- Include an explanation of why a specific action should be taken.
- Clearly define the action to be taken.
- Issue the alert in the primary language of the intended recipients.
- Avoid issuing too many alerts that are not applicable to recipients.
- Establish alert coordination across multiple (overlapping and/or adjacent) jurisdictions to avoid duplicate alerts.
- Avoid issuing bogus alerts following a security compromise of a WEA site.

Another phase of our research involves the development of security guidance for alert originators and their use of WEA services. This involves taking a deeper dive into security issues surrounding the WEA service. Our aim is to ensure that trust is built into the WEA capability. We can help alert originators understand their security risks so that they may make the right system implementation and integration choices. We welcome your feedback on our work. Please leave feedback in the comments section below.

Additional Resources

To read the SEI technical report, Maximizing Trust in the Wireless Emergency Alerts (WEA) Service, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetID=70004.

To read the SEI technical report, Wireless Emergency Alerts (WEA) Cybersecurity Risk Management Strategy for Alert Originators, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetID=70071.

To download the report, Best Practices in Wireless Emergency Alerts, please visit http://www.firstresponder.gov/TechnologyDocuments/Wireless%20Emergency%20Alerts%20Best%20Practices.pdf.
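As an aside, the message-content recommendations discussed in this post (the 90-character limit, a clearly defined action) lend themselves to mechanical pre-checks by alerting software. The sketch below is illustrative only; the rule set and action-word list are assumptions, not part of the WEA specification:

```python
# Minimal sketch of checks an alert originator's software might run
# before issuing a WEA message. The 90-character limit matches the
# constraint discussed above; the action-word heuristic is invented
# for illustration.

MAX_CHARS = 90
ACTION_WORDS = ("evacuate", "shelter", "avoid", "seek", "move")

def check_wea_message(text):
    """Return a list of problems found in a draft WEA message."""
    problems = []
    if len(text) > MAX_CHARS:
        problems.append(f"too long: {len(text)} > {MAX_CHARS} characters")
    if not any(word in text.lower() for word in ACTION_WORDS):
        problems.append("no recommended action stated")
    return problems

draft = "Flash Flood Warning this area til 6:00 PM CDT. Avoid flood areas. Check local media."
print(check_wea_message(draft))   # → [] (passes both checks)
```

Such checks cannot replace human review of message content, but they can catch the mechanical errors (overlength text, missing action) that the trust model flags as damaging.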
SEI Blog | Sep 10, 2015 09:09am
By Douglas C. Schmidt, Principal Researcher

In the first half of this year, the SEI blog has experienced unprecedented growth, with visitors in record numbers learning more about our work in big data, secure coding for Android, malware analysis, Heartbleed, and V models for testing. In the first six months of 2014 (through June 20), the SEI blog logged 60,240 visits, nearly matching the entire 2013 total of 66,757 visits. As we reach the mid-year point, this blog posting looks back at our most popular areas of work (at least according to you, our readers) and highlights our most popular blog posts for the first half of 2014, along with links to related resources that readers might find of interest.

Big Data

New data sources, ranging from diverse business transactions to social media, high-resolution sensors, and the Internet of Things, are creating a digital tsunami of big data that must be captured, processed, integrated, analyzed, and archived. Big data systems that store and analyze petabytes of data are becoming increasingly common in many application domains. These systems represent major, long-term investments, requiring considerable financial commitments and massive-scale software and system deployments. With analysts estimating data storage growth at 30 to 60 percent per year, organizations must develop a long-term strategy to address the challenge of managing projects that analyze exponentially growing data sets with predictable, linear costs. In a popular series on the SEI blog, researcher Ian Gorton describes the software engineering challenges of big data systems. In the first post in the series, Addressing the Software Engineering Challenges of Big Data, Gorton describes a risk-reduction approach called Lightweight Evaluation and Architecture Prototyping (for Big Data) that he developed with fellow researchers at the SEI.
The approach is based on principles drawn from proven architecture and technology analysis and evaluation techniques to help the Department of Defense (DoD) and other enterprises develop and evolve systems to manage big data. In the second post in the series, The Importance of Software Architecture in Big Data Systems, Gorton explores how the nature of building highly scalable, long-lived big data applications influences iterative and incremental design approaches. In the third post in the series, Four Principles of Engineering Scalable, Big Data Software Systems, Gorton describes principles that hold for any scalable big data system. The fourth post in the series describes how to address one of these challenges, namely, that you can't manage what you don't monitor.

Readers interested in finding out more about Gorton's research in big data can view the following additional resources:

- Webinar: Software Architecture for Big Data Systems
- Podcast: An Approach to Managing the Software Engineering Challenges of Big Data

Secure Coding for the Android Platform

One of the most popular areas of research among SEI blog readers so far this year has been the series of posts highlighting our work on secure coding for the Android platform. Android is an important area to focus on, given its mobile device market dominance (82 percent of worldwide market share in the third quarter of 2013), the adoption of Android by the Department of Defense, and the emergence of popular massive open online courses on Android programming and security. Since its publication in late April, the post Two Secure Coding Tools for Analyzing Android Apps, by Will Klieber and Lori Flynn, has been among the most popular on our site. The post highlights a tool they developed, DidFail, that addresses a problem often seen in information flow analysis: the leakage of sensitive information from a sensitive source to a restricted sink (taint flow).
Previous static analyzers for Android taint flow did not combine precise analysis within components with analysis of communication between Android components (intent flows). CERT's new tool analyzes taint flow for sets of Android apps, not just single apps. DidFail is available to the public as a free download, as is a small test suite of apps that demonstrates the functionality DidFail provides. The second tool, which was developed for a limited audience and is not yet publicly available, addresses activity hijacking attacks, which occur when a malicious app receives a message (an intent) that was intended for another app but not explicitly designated for it. The post by Klieber and Flynn is the latest in a series detailing the CERT Secure Coding team's work on techniques and tools for analyzing code for mobile computing platforms.

In April, Flynn also authored a post, Secure Coding for the Android Platform, that highlights secure coding rules and guidelines specific to the use of Java in the Android platform. Although the CERT Secure Coding team has developed secure coding rules and guidelines for Java, prior to 2013 the team had not developed a set of secure coding rules specific to Java's application in the Android platform. Flynn's post discusses our initial set of Android rules and guidelines, which include mapping our existing Java secure coding rules and guidelines to Android and creating new Android-specific rules for Java secure coding.
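The core idea behind inter-component taint flow, propagating a "tainted" label from sensitive sources along intent flows to see whether it reaches a restricted sink, can be sketched abstractly. The components, flows, and labels below are invented for illustration; this is not DidFail's algorithm or output:

```python
# Toy sketch of inter-app taint propagation along intent flows.
# Component names, sources, and sinks are hypothetical examples.

# Each component maps to the components it sends intents to.
INTENT_FLOWS = {
    "AppA.ReadContacts": ["AppA.Sender"],     # source: reads sensitive data
    "AppA.Sender":       ["AppB.Receiver"],   # inter-app intent
    "AppB.Receiver":     ["AppB.Uploader"],
    "AppB.Uploader":     [],                  # sink: sends data off-device
}

SOURCES = {"AppA.ReadContacts"}
SINKS = {"AppB.Uploader"}

def tainted_sinks():
    """Propagate taint from sources along intent flows; report sinks reached."""
    tainted, frontier = set(SOURCES), list(SOURCES)
    while frontier:
        comp = frontier.pop()
        for nxt in INTENT_FLOWS.get(comp, []):
            if nxt not in tainted:
                tainted.add(nxt)
                frontier.append(nxt)
    return tainted & SINKS

print(tainted_sinks())  # → {'AppB.Uploader'}
```

The point the post makes is that analyzing apps in sets matters: neither AppA nor AppB alone exhibits a source-to-sink path, but the combination does.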
Readers interested in finding out more about the CERT Secure Coding team's work in secure coding for the Android platform can view the following additional resources:

- Paper: Android Taint Flow Analysis for App Sets (SOAP 2014 workshop)
- Presentation: Android Taint Flow Analysis for App Sets
- Thesis: Precise Static Analysis of Taint Flow for Android Application Sets
- CERT Secure Coding Rules and Guidelines: CERT Secure Coding Rules and Guidelines for Android wiki

For more than 10 years, the CERT Secure Coding Initiative at the SEI has been working to develop guidance—most recently The CERT C Secure Coding Standard: Second Edition—for developers and programmers through the development of coding standards by security researchers, language experts, and software developers using a wiki-based community process. In a post published in early May, Robert Seacord, technical manager of CERT Secure Coding, explored the importance of a well-documented and enforceable coding standard in helping programmers circumvent pitfalls and avoid vulnerabilities like Heartbleed.

Readers interested in finding out more about the CERT Secure Coding team's work on the C coding standard can view the following additional resources:

- Book: The CERT C Coding Standard, Second Edition: 98 Rules for Developing Safe, Reliable, and Secure Systems
- Newsletter: To subscribe to our Secure Coding eNewsletter, please click here.
- CERT Secure Coding Rules and Guidelines: CERT C Coding Standard wiki (To sign up for a free account on the CERT Secure Coding wiki, please visit http://www.securecoding.cert.org.)

Heartbleed

The Heartbleed bug, a serious vulnerability in the OpenSSL cryptographic software library, enables attackers to steal information that, under normal conditions, is protected by the Secure Sockets Layer/Transport Layer Security (SSL/TLS) encryption used to secure the Internet. Heartbleed left many questions in its wake:

- Would the vulnerability have been detected by static analysis tools?
- If the vulnerability had been in the wild for two years, why did it take so long to come to public knowledge?
- Who is ultimately responsible for open-source code reviews and testing?
- Is there anything we can do to work around Heartbleed to provide security for banking and email web browser applications?

In April 2014, researchers from the SEI and Codenomicon, one of the cybersecurity organizations that discovered the Heartbleed vulnerability, participated in a panel to discuss Heartbleed and strategies for preventing future vulnerabilities. During the panel discussion, the researchers ran out of time to address all of the questions asked by the audience, so they transcribed the questions, and panel members wrote responses. We published the questions and responses as a blog post that was among our most popular in the last six months.

Readers interested in finding out more about Heartbleed can view the following additional resources:

- Webinar: A Discussion on Heartbleed: Analysis, Thoughts, and Actions
- Vulnerability Note: CERT researchers created a vulnerability note about the Heartbleed bug that records information about affected vendors as well as other useful information.

Automated Testing in Open Systems Architecture Initiatives

In March, we published our first SEI video blog with my post, The Importance of Automated Testing in Open Systems Architecture Initiatives, which was also well received by our readers. In the post, I described how the Navy is operationalizing Better Buying Power in the context of its Open Systems Architecture and Business Innovation initiatives. Given the expense of our major defense acquisition programs—coupled with budget limitations stemming from the fiscally constrained environment—the United States Department of Defense (DoD) has made cost containment a top priority.
The Better Buying Power 2.0 initiative is a concerted effort by the DoD to achieve greater efficiencies in the development, sustainment, and recompetition of major defense acquisition programs through cost control, elimination of unproductive processes and bureaucracy, and promotion of open competition.

In the post, I also presented results from a recent online war game that underscore the importance of automated testing in these initiatives to help avoid common traps and pitfalls of earlier cost containment measures. The Massive Multiplayer Online Wargame Leveraging the Internet (MMOWGLI) platform used for this online war game was developed by the Naval Postgraduate School in Monterey, California. This web-based platform supports thousands of distributed players who work together in a crowd-sourcing manner to encourage innovative thinking, generate problem-solving ideas, and plan actions that realize those ideas.

Given the current fiscal climate in the DoD, it's not surprising that many action plans in the second Business Innovation Initiative MMOWGLI war game dealt with cost-containment strategies. In the post, I listed several action plans (each followed by the goal of that action plan in italics) that came out of the second Business Innovation Initiative MMOWGLI war game:

- providing a bonus to Navy team members who save money on acquisition programs. The goal is to incentivize program office teams to take both a short- and long-term view toward efficient acquisitions by optimizing prompt/early delivery of artifacts with accrued savings over the lifecycle.
- rewarding a company for saving money on an acquisition contract: top savers would be publicly recognized and rewarded. The goal is to allow effective public image improvement for both government and industry partners of all sizes and types to receive tangible recognition of cost-saving innovations.
- increasing the incentive paid to a contractor if the actual cost of its delivered solution was less than the targeted cost. The goal is to give industry a clear mechanism for reporting cost savings, a clear way to calculate the reward for cost savings, and a transparent method for inspecting actuals over time.

Readers interested in finding out more about other work in this field can view the following resources:

- Video Blog: The Importance of Automated Testing in Open Systems Architecture Initiatives
- Paper: Experiences Using Online War Games to Improve the Business of Naval Systems Acquisition

Three Variations on the V Model of Testing

Don Firesmith's post, Using V Models for Testing, which was published in November, has remained one of the most popular posts on our site throughout the first half of this year. It introduces three variants of the traditional V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method. The V model builds on the traditional waterfall model of system or software development by emphasizing verification and validation. The V model takes the bottom half of the waterfall model and bends it upward into the form of a V, so that the activities on the right verify or validate the work products of the activities on the left. More specifically, the left side of the V represents the analysis activities that decompose the users' needs into small, manageable pieces, while the right side of the V shows the corresponding synthesis activities that aggregate (and test) these pieces into a system that meets the users' needs. The single V model modifies the nodes of the traditional V model to represent the executable work products to be tested rather than the activities used to produce them. The double V model adds a second V to show the type of tests corresponding to each of these executable work products.
The triple V model adds a third V to illustrate the importance of verifying the tests themselves to determine whether they contain defects that could stop or delay testing or lead to false positive or false negative test results. In the triple V model, it is neither required nor advisable to wait until the right side of the V to perform testing. Unlike the traditional model, where tests may be developed but not executed until the code exists (i.e., the right side of the V), with executable requirements and architecture models, tests can now be executed on the left side of the V.

Readers interested in finding out more about Firesmith's work in this field can view the following resources:

- Book: Common System and Software Testing Pitfalls
- Podcast: Three Variations on the V Model for System and Software Testing

DevOps

With the post An Introduction to DevOps, C. Aaron Cois kicked off a series exploring various facets of DevOps, both from an internal perspective, drawing on his own experiences as a software engineering team lead, and through the lens of the impact of DevOps on the software community at large. Here's an excerpt from his initial post:

At Flickr, the video- and photo-sharing website, the live software platform is updated at least 10 times a day. Flickr accomplishes this through an automated testing cycle that includes comprehensive unit testing and integration testing at all levels of the software stack in a realistic staging environment. If the code passes, it is then tagged, released, built, and pushed into production. This type of lean organization, where software is delivered on a continuous basis, is exactly what the agile founders envisioned when crafting their manifesto: a nimble, streamlined process for developing and deploying software into the hands of users while continuously integrating feedback and new requirements.
A key to Flickr's prolific deployment is DevOps, a software development concept that literally and figuratively blends development and operations staff and tools in response to the increasing need for interoperability. Earlier this month, Cois continued the series with the post A Generalized Model for Automated DevOps, in which he presents a generalized model for automated DevOps and describes its significant potential advantages for a modern software development team.

Readers interested in learning more about DevOps should listen to the following resource:

- Podcast: DevOps - Transform Development and Operations for Fast, Secure Deployments

Malware Analysis

Every day, analysts at major anti-virus companies and research organizations are inundated with new malware samples. From Flame to lesser-known strains, figures indicate that the number of malware samples released each day continues to rise. In 2011, malware authors unleashed approximately 70,000 new strains per day, according to figures reported by Eugene Kaspersky. The following year, McAfee reported that 100,000 new strains of malware were unleashed each day. An article published in the October 2013 issue of IEEE Spectrum updated that figure to approximately 150,000 new malware strains per day. Not enough manpower exists to manually address the sheer volume of new malware samples that arrive daily in analysts' queues.

CERT researcher Jose Morales sought to develop an approach that would allow analysts to identify and focus first on the most destructive binary files. In his blog post A New Approach to Prioritizing Malware Analysis, Morales describes the results of research he conducted with fellow researchers at the SEI and CMU's Robotics Institute, highlighting an analysis that demonstrates the validity (with 98 percent accuracy) of an approach that helps analysts distinguish between the malicious and benign nature of a binary file.
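The general idea behind behavior-based prioritization, scoring incoming samples by observed execution behaviors so the most suspicious binaries are analyzed first, can be illustrated with a toy sketch. The features and weights below are invented for illustration and are not the SEI/CMU approach itself:

```python
# Toy sketch of prioritizing a malware-analysis queue by observed
# execution behavior. Behaviors and weights are hypothetical.

SUSPICION_WEIGHTS = {
    "writes_to_system_dir": 3,
    "modifies_autostart":   3,
    "opens_remote_socket":  2,
    "spawns_processes":     1,
}

def suspicion_score(behaviors):
    """Sum the weights of observed behaviors; higher = analyze sooner."""
    return sum(SUSPICION_WEIGHTS.get(b, 0) for b in behaviors)

def prioritize(samples):
    """Order (name, behaviors) pairs so the most suspicious come first."""
    return sorted(samples, key=lambda s: suspicion_score(s[1]), reverse=True)

queue = [
    ("sample_a", ["spawns_processes"]),
    ("sample_b", ["writes_to_system_dir", "opens_remote_socket"]),
    ("sample_c", ["modifies_autostart", "opens_remote_socket",
                  "spawns_processes"]),
]
print([name for name, _ in prioritize(queue)])  # → ['sample_c', 'sample_b', 'sample_a']
```

A real system would learn such indicators and their weights from labeled samples rather than hand-assigning them, but the ranking step works the same way.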
This blog post is a follow-up to his 2013 post, Prioritizing Malware Analysis, which describes the approach, based on the file's execution behavior.

Readers interested in learning more about prioritizing malware analysis should listen to the following resource:

- Podcast: Characterizing and Prioritizing Malicious Code

Looking Ahead

In the coming months, we will continue our series on DevOps and are also creating posts that will explore metrics of code quality, contextual computing, and many other topics. Thank you for your support. We publish a new post on the SEI blog every Monday morning. Let us know if there is any topic you would like to see covered in the SEI Blog. We welcome your feedback in the comments section below.
SEI Blog | Sep 10, 2015 09:07am
By Douglas C. Schmidt, Principal Researcher

As part of an ongoing effort to keep you informed about our latest work, I would like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in assuring software reliability, future architectures, Agile software teams, insider threat, and HTML5. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Assuring Software Reliability
By Robert J. Ellison

The 2005 Department of Defense Guide for Achieving Reliability, Availability, and Maintainability (RAM) recommended an emphasis on engineering analysis and formal design reviews, with less reliance on RAM predictions. A number of studies have shown the limitations of current system development practices for meeting these recommendations. This document describes ways that the analysis of the potential impact of software failures (regardless of cause) can be incorporated into development and acquisition practices through the use of software assurance.
Download the PDF

Patterns and Practices for Future Architectures
By Eric Werner, Scott McMillan, & Jonathan Chu

Graph algorithms are widely used in Department of Defense (DoD) applications, including intelligence analysis, autonomous systems, cyber intelligence and security, and logistics optimization. These analytics must execute at larger scales and higher rates to accommodate the growing velocity, volume, and variety of data sources. The implementations of these algorithms that achieve the highest levels of performance are complex and intimately tied to the underlying architecture. New and emerging computing architectures require new and different implementations of these well-known graph algorithms, yet it is increasingly expensive and difficult for developers to implement algorithms that fully leverage their capabilities.
This project investigates approaches that will make high-performance graph analytics on new and emerging architectures more accessible to users. The project is researching the best practices, patterns, and abstractions that will enable the development of a software graph library that separates the concerns of expressing graph algorithms from the details of the underlying computing architectures. The approach started with a fundamental graph analytics function: the breadth-first search (BFS). This technical note compares different BFS algorithms for central and graphics processing units, examining the abstractions used and comparing the complexity of the implementations against the performance achieved.

Download the PDF

Agile Software Teams: How They Engage with Systems Engineering on DoD Acquisition Programs
By Eileen Wrubel, Suzanne Miller, Mary Ann Lapham, & Timothy A. Chick

This technical note, part of an ongoing series on Agile in the Department of Defense (DoD), addresses key issues that occur when Agile software teams engage with systems engineering functions in the development and acquisition of software-reliant systems. Published acquisition guidance still largely focuses on a system perspective, and fundamental differences exist between systems engineering and software engineering approaches. Those differences are compounded when Agile methods, rather than more traditional waterfall-based development lifecycles, become part of the mix. For this technical note, the SEI gathered more data from users of Agile methods in the DoD and delved deeper into the existing body of knowledge about Agile and systems engineering before addressing these issues.
Topics considered here include various interaction models for integrating systems engineering functions with Agile engineering teams, automation, insight and oversight, training, the role of Agile advocates/sponsors and coaches, the use of pilot programs, stakeholder involvement, requirements evolution, verification and validation activities, and the means by which Agile teams align their increments with program milestones. This technical note offers insight into how systems engineers and Agile software engineers can better collaborate as they take advantage of Agile to deliver incremental mission capability.

Download the PDF

Unintentional Insider Threats: A Review of Phishing and Malware Incidents by Economic Sector
By the CERT Insider Threat Team

The research documented in this report seeks to advance understanding of the unintentional insider threat (UIT) that results from phishing and other social engineering cases, specifically those involving malicious software (malware). The research team collected and analyzed publicly reported phishing cases involving malware and performed an initial analysis of the industry sectors affected by this type of incident. This report provides that analysis as well as case examples and potential recommendations for mitigating UITs stemming from phishing and other social engineering incidents. The report also compares security offices’ current practice of UIT monitoring to the manufacturing and healthcare industries’ practice of tracking near misses of adverse events.

Download the PDF

Evaluation of the Applicability of HTML5 for Mobile Applications in Resource-Constrained Edge Environments
By Bryan Yan (Carnegie Mellon University - Institute for Software Research) & Grace Lewis

Mobile applications are increasingly being used by first responders and soldiers to support their missions.
These users operate in resource-constrained edge environments characterized by dynamic context, limited computing resources, intermittent network connectivity, and high levels of stress. In addition to efficient battery management, mobile applications operating in edge environments require efficient use of onboard sensors to capture, store, and send data across networks that may be intermittent. The traditional method for building mobile applications is to use the native software development kit (SDK) of a particular mobile platform, such as Android or iOS. However, HTML5 has recently evolved to a stage where it supports many of the development features that native SDKs support. The advantages of using HTML5 include not only cross-platform development and deployment but also simpler distribution and testing, because mobile edge applications would not have to be deployed on the device; they simply run inside the web browser that already exists there. This technical note presents an analysis of the feasibility of using HTML5 for developing mobile edge applications, as well as the use of bridging frameworks to fill gaps in HTML5 development features. It also discusses the software architecture implications of HTML5 mobile application development. The work presented in this note is the result of an independent study in Carnegie Mellon University’s Master of Information Technology - Embedded Software Engineering (MSIT-ESE) program.

Download the PDF

Additional Resources

For the latest SEI technical reports and notes, please visit http://resources.sei.cmu.edu/library/.