

We have heard it before. The proposed regulatory changes to the white collar exemption are "imminent." And then they were delayed. Well, the regulations were sent by the DOL to the OMB. The conventional wisdom is that they will be published on June 18, 2015 (I suspect so the DOL can say "Spring"). We know the purpose and effect of the proposed regulations will be to increase the number of individuals who are non-exempt. At a minimum, exempt status will carry a heavier price tag. The federal minimum weekly salary is going up—the only question is how high. My prediction: ...
SHRM . Blog . Jul 27, 2015 01:14pm
Everyone who drives a car understands the importance of a dashboard. How fast are you going? How much gas do you have left? Are there any warning lights flashing? An executive dashboard can give you the same kind of information in real time for your organization and its health. Key characteristics of an executive dashboard:
- Uses visual indicators as the primary mode of providing information
- Is connected to databases that provide near real-time information
An executive dashboard runs on your computer, uses graphs and maps as its primary display, and is connected to databases that are updated regularly so you aren't looking at old information. Just like car dashboards, executive dashboards can vary in appearance. There are many industry-standard frameworks for implementing dashboards, including the Balanced Scorecard, Six Sigma, and SCOR. Subsequent posts will discuss these frameworks in greater detail.
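The refresh cycle described above — a display layer polling an operational database for the latest reading of each metric and flashing warning indicators — can be sketched roughly as follows. This is a minimal illustration, not a real dashboard product; the table, metric names, and threshold values are all hypothetical.

```python
import sqlite3

def latest_metrics(conn):
    """Fetch the most recent value for each metric, mimicking a
    dashboard's periodic refresh against an operational database.
    (SQLite returns the bare columns from the row matching MAX().)"""
    rows = conn.execute(
        """SELECT name, value, MAX(recorded_at)
           FROM metrics GROUP BY name"""
    ).fetchall()
    return {name: value for name, value, _ in rows}

def warning_lights(metrics, thresholds):
    """Return the indicators a dashboard would flash: every metric
    that has crossed its configured threshold."""
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0) > limit]

# Hypothetical data: a tiny in-memory stand-in for the real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (name TEXT, value REAL, recorded_at TEXT)")
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?)", [
    ("open_incidents", 3, "2015-07-27T13:00"),
    ("open_incidents", 7, "2015-07-27T13:10"),  # newer reading wins
    ("inventory_turns", 4.2, "2015-07-27T13:10"),
])

current = latest_metrics(conn)
alerts = warning_lights(current, {"open_incidents": 5})
print(current, alerts)
```

A real deployment would replace the in-memory database with a connection to the systems of record and run this refresh on a timer, so the display never shows stale data.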
Netwoven . Blog . Jul 27, 2015 01:14pm
By Kevin Fall, Deputy Director, Research, and CTO, SEI

Software and acquisition professionals often have questions about recommended practices related to modern software development methods, techniques, and tools, such as how to apply agile methods in government acquisition frameworks, systematic verification and validation of safety-critical systems, and operational risk management. In the Department of Defense (DoD), these techniques are just a few of the options available to face the myriad challenges in producing large, secure software-reliant systems on schedule and within budget. In an effort to offer our assessment of recommended techniques in these areas, the SEI built upon an existing collaborative online environment known as SPRUCE (Systems and Software Producibility Collaboration Environment), hosted on the Cyber Security & Information Systems Information Analysis Center (CSIAC) website. From June 2013 to June 2014, the SEI assembled guidance on a variety of topics based on relevance, maturity of the practices described, and timeliness with respect to current events. For example, shortly after the Target security breach of late 2013, we selected Managing Operational Resilience as a topic. Ultimately, the SEI curated recommended practices on five software topics: Agile at Scale, Safety-Critical Systems, Monitoring Software-Intensive System Acquisition Programs, Managing Intellectual Property in the Acquisition of Software-Intensive Systems, and Managing Operational Resilience. In addition to a recently published paper on SEI efforts and individual posts on the SPRUCE site, these recommended practices will be published in a series of posts on the SEI blog. The following post, Managing Operational Resilience by Julia H. Allen, Pamela Curtis, and Nader Mehravari, presents challenges for managing operational resilience (in this post) and recommended practices for helping organizations manage operational resilience (in the second post in this series).
Managing Operational Resilience - SPRUCE/SEI
https://www.csiac.org/spruce/resources/ref_documents/recommended-practices-managing-operational-resilience

A search at your favorite news aggregator for keywords such as "malware," "computer virus," or "data breach" will return tens of thousands of results. For most organizations it's not a question of if a cyber attack will occur, but when. When an attack happens, the tempo of response must be fast, so an organization must already have practices in place covering how to respond. These practices should reflect a strategic approach that balances actions that protect assets—such as customer data and intellectual property—with actions that sustain services and operations. A recommended approach to address both protection and sustainment is the application of resilience management practices. Operational resilience is the ability of an entity to prevent disruptions to its mission from occurring, continue to meet its mission if a disruption or incident does occur, and return to normalcy when the disruption is eliminated. The concept of operational resilience applies to entities such as organizations, systems, networks, supply chains, critical infrastructure, cyberspace, Armed Forces, and even nations.
Operational resilience management includes all the practices of planning, integrating, executing, and governing activities to ensure that an entity can
- identify and mitigate operational risks that could lead to service disruptions before they occur
- prepare for and respond to disruptive events (realized risks) in a manner that demonstrates command and control of incident response and service continuity
- recover and restore mission-critical services and operations following an incident within acceptable time frames
Operational resilience management draws from several complex and evolving disciplines, including risk management, business continuity, disaster recovery, information security, incident and emergency management, information technology (IT), service delivery, workforce management, and supply-chain management, each with its own terminology, principles, and solutions. The practices described here reflect the convergence of these distinct, often siloed disciplines. As resilience management becomes an increasingly relevant and critical attribute of their missions, organizations should strive for deeper coordination and integration of its constituent activities. Our discussion of operational resilience management as presented in this post has three parts. First, we set the context by providing an answer to the question "Why is operational resilience management challenging?" The next post in this series will present a set of recommended practices for operational resilience management. Our original SPRUCE post concludes with an extensive list of selected resources to help you learn more about operational resilience management, with added links to various sources to help amplify some points. Every organization is different; judgment is required to implement these practices in a way that benefits your organization. In particular, be mindful of your mission, goals, existing processes, and culture. All practices have limitations.
Some of these practices will be more relevant to your situation than others, and their applicability will depend on the context in which you apply them. To gain the most benefit, you need to evaluate each practice for its appropriateness and decide how to adapt it, striving for an implementation in which the practices meet your business objectives. Also, consider additional collections of recommended practices, including those among the various sources at the bottom of the webpage. Monitor your adoption and use of these practices, and adjust as appropriate. These practices are certainly not complete—they are a work in progress.

Why is Managing Operational Resilience Challenging?

Over the past 10 years, organizations have invested a tremendous amount of resources in cybersecurity. Nevertheless, regardless of how much has been spent on protection, cyber attackers continue to penetrate systems. We have reached a point in the battle for information and cybersecurity where we should shift security investment from a narrow focus on planning how to avoid cyber attacks to a more balanced focus on both avoidance and planning how to recover from cyber attacks. Operational resilience management has two sides—protect and sustain—and both are equally important. An organization must learn about the threat environment, maintain situational awareness of the context in which it operates, and create a risk-management plan that is as thorough and reliable as possible. But when an attack occurs, can the organization sustain its critical services and operations? Can it adequately recover its systems and get them back online as quickly as possible? Can it restore and recover service within a prescribed recovery time and according to its recovery-point objectives? An organization must ask: where can we not afford to have something bad happen, and where can we afford to have something bad happen and bounce back as quickly as we can?
The need for organizations to achieve a balance between protect and sustain is why operational resilience management is so important. Operational resilience management is challenging for several reasons:

1. Making a long-term commitment: Operational resilience is an emergent property. An emergent property is not something an organization can buy and put in place or assemble by buying its parts. For a property to emerge within an organization, the organization must execute a certain set of activities in a coordinated manner and do so with consistent discipline. Achieving operational resilience requires an organization to make a long-term commitment to perform certain activities with consistency. The activities involved in operational resilience management must become part of the organization's daily habits across the enterprise.

2. Understanding the big picture: To be operationally resilient, organizations must address operational risk on many dimensions simultaneously, including people, technology, information, facilities, supply-chain, management, cyber, and physical dimensions. This requires careful planning, coordination, and training across many interdependent domains, as well as understanding how the organization's capabilities along these dimensions contribute to mission success.

3. Overcoming organizational hurdles: An organization may encounter a number of barriers to operational resilience management, including
- the vague and abstract nature of operational risk management
- compartmentalization of operational risk-management activities, such as segmenting responsibilities for information security and business continuity/disaster recovery
- focusing on technology instead of on all the dimensions listed in Challenge 2
- the proliferation of practices for operational resilience management
- insufficient funding and staff
- insufficient success stories and measurements
- (over)reliance on people
- the regulatory climate
- existing policies
- the tendency to ignore current information to avoid a painful reality and the need to act
- competitive pressures or short-term goals

Looking Ahead

Technology transition is a key part of the SEI's mission and a guiding principle in our role as a federally funded research and development center. The next post in this series will explore recommended practices for managing operational resilience in organizations as well as strategies for deriving more benefit from those recommended practices. We welcome your comments and suggestions on this series.

Additional Resources

For comprehensive information about CERT's research on operational resilience management, please see www.cert.org/resilience. For more information about frameworks and maturity models, please see Buyer Beware: How to be a Better Consumer of Security Maturity Models, presented by Julia Allen and Nader Mehravari at the February 2014 RSA Conference. Richard A. Caralli, Julia H. Allen, and David W. White also published the book CERT Resilience Management Model (CERT-RMM): A Maturity Model for Managing Operational Resilience (Addison-Wesley Professional, 2011).
SEI . Blog . Jul 27, 2015 01:13pm
The design of executive dashboards varies depending upon the needs of the executives for whom they are designed. However, well-designed executive dashboards commonly share the following characteristics:
- Highly graphical, enabling executives to read and understand the key metrics in very little time
- Tailored to the needs of the executive who uses them; the VP of Sales probably doesn't need to see total inventory turns or human resource information
- Starts with a high-level view; by clicking on the relevant graph or map, the user can drill down into more detail
- Navigation is easy and intuitive
- Automatically updated with the latest available data, so you're not making decisions based on old information
One also needs to ensure that the dashboard adheres to the following rules of usability:
- Relevance: ensure that only relevant information is presented at the top level
- Clarity: ensure that the data and information are well assimilated and presented in an easy-to-use way
- Hierarchy: ensure that users of the dashboard can easily navigate from high-level metrics to the details
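The hierarchy rule above — a high-level view whose nodes can be "clicked" to reveal detail — can be sketched as a small tree of metrics. The metric names and figures here are hypothetical, chosen only to illustrate the drill-down idea:

```python
# A minimal sketch of drill-down navigation: each top-level metric
# holds child metrics one level of detail down, and each "click"
# narrows the view by one level. All names and figures are made up.

sales_view = {
    "total_sales": {
        "value": 120_000,
        "children": {
            "region_west": {"value": 70_000, "children": {}},
            "region_east": {"value": 50_000, "children": {}},
        },
    },
}

def drill(view, *path):
    """Follow a path of clicks from the top-level view to a detail node."""
    node = view[path[0]]
    for step in path[1:]:
        node = node["children"][step]
    return node["value"]

print(drill(sales_view, "total_sales"))                 # top-level view
print(drill(sales_view, "total_sales", "region_west"))  # one click down
```

The same shape generalizes to any depth, which is what lets a well-designed dashboard start with a few top-level indicators and still satisfy the relevance rule at every level.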
Netwoven . Blog . Jul 27, 2015 01:13pm
By Kevin Fall, Deputy Director, Research, and CTO, SEI

Software and acquisition professionals often have questions about recommended practices related to modern software development methods, techniques, and tools, such as how to apply agile methods in government acquisition frameworks, systematic verification and validation of safety-critical systems, and operational risk management. In the Department of Defense (DoD), these techniques are just a few of the options available to face the myriad challenges in producing large, secure software-reliant systems on schedule and within budget. In an effort to offer our assessment of recommended techniques in these areas, the SEI built upon an existing collaborative online environment known as SPRUCE (Systems and Software Producibility Collaboration Environment), hosted on the Cyber Security & Information Systems Information Analysis Center (CSIAC) website. From June 2013 to June 2014, the SEI assembled guidance on a variety of topics based on relevance, maturity of the practices described, and timeliness with respect to current events. For example, shortly after the Target security breach of late 2013, we selected Managing Operational Resilience as a topic. Ultimately, the SEI curated recommended practices on five software topics: Agile at Scale, Safety-Critical Systems, Monitoring Software-Intensive System Acquisition Programs, Managing Intellectual Property in the Acquisition of Software-Intensive Systems, and Managing Operational Resilience. In addition to a recently published paper on SEI efforts and individual posts on the SPRUCE site, these recommended practices will be published in a series of posts on the SEI blog. The first post in this series, by Julia H. Allen, Pamela Curtis, and Nader Mehravari, presented challenges for managing operational resilience.
This post presents recommended practices for helping organizations manage operational resilience as well as strategies for making the best use of the recommended practices.

Recommended Practices for Managing Operational Resilience in Organizations
https://www.csiac.org/spruce/resources/ref_documents/recommended-practices-managing-operational-resilience

1. Governance and program management. Organizations must oversee and manage the execution of resilience activities. Resilient organizations ensure that all such activities derive their purpose and focus from strategic objectives and critical success factors for operational resilience. The governance and program-management practice ensures that the investment in operational resilience, cybersecurity, service continuity, and other domains is consistent with the organization's business objectives. This practice entails regular planning, definition of roles and responsibilities, adequate funding, appropriate resource allocations, oversight in executing the plan, and corrections as necessary. In addition, governance and program management involves measuring, analyzing, and reporting the effectiveness of resilience-management practices and implementing improvements. These are all standard business practices for successful, mature organizations, but they are often overlooked when managing operational resilience.

2. Staff preparation and deployment. Organizations must be prepared when a disruptive event occurs. That means making sure that staff at all levels of the organization are trained in how to perform their assigned roles when disruptions occur. Everyone must know his or her role, receive training, and rehearse plans and contingencies. Skill gaps and deficiencies should be identified and training provided to address them. Training can be designed to help meet the goals of resilience management as well as other goals of the organization that depend on interdisciplinary team performance.
For example, teams with members drawn from different disciplines and departments can train together in a scenario that encourages interaction, mutual understanding, and building trust among team members. Such training breaks down barriers that otherwise naturally arise when work must be done across disciplines and departments. This practice also encompasses establishing staff backup and redundancy at all levels of the organization. For key personnel, not only is it important to have backups who can step in; organizations should also identify qualified successors to staff members in key positions if those positions are vacated. Training is not a one-time event. The organization should provide periodic refresher training for all key functions so that responsibilities and skills are not forgotten in the stress of disruptive events.

3. Communication and awareness. Resilient organizations make establishing and maintaining communications with stakeholders a key objective in all operational resilience-management practices—both during normal operations and during periods of stress. Communication is always important, but it is particularly essential during times of disruption. The organization should plan in advance exactly who will contact whom during and following disruptive events. Plan who will communicate with stakeholders, including both customers and suppliers, to share information and make stakeholders aware of the status of the situation. In addition, develop communication methods (newsletters, email notifications, community meetings, etc.), channels (public relations activities, peer and professional organizations, etc.), infrastructure, and systems (such as emergency alerting via mobile devices). This practice includes both internal and external communication.
An organization should report ongoing measurement of operational performance and resilience-management activities and disseminate that information across the enterprise to ensure that all organizational units are operating with an up-to-date picture of the organization's operations. External communication tasks may include providing information to news media about the organization's resilience efforts or efforts to contain an incident or event. As appropriate, establish responsibility for planning and executing crisis communications among first responders, other emergency and public service staff, and law enforcement.

4. Risk management. Organizations must identify, analyze, and mitigate risks to assets that could adversely affect the operation and delivery of high-value services. Because an organization cannot protect against every possible threat, risk management involves identifying critical services and operations, identifying the assets that enable their delivery, and prioritizing them. Based on the strategic objectives established in Practice 1, an organization identifies, analyzes, and prioritizes the set of risks that it will monitor and mitigate. This means that some risks will not be addressed, whether intentionally or accidentally. The goal of risk management is to limit exposure to the latter, but an organization can simply accept some risks and monitor them as residual risks (e.g., a price increase for a critical purchased component). In this way, the organization knows that it has an exposure but has attempted to intelligently limit that exposure. Risk management is a continuous process involving identifying new risks, updating the status and disposition of identified risks, determining how to handle the risks (e.g., prevent, mitigate, monitor, or accept), and implementing the selected risk-handling option. For most organizations, this includes cyber risks—and, more specifically, software vulnerabilities and malware.
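The continuous process described in Practice 4 — identify risks, analyze and prioritize them, and record a handling decision (prevent, mitigate, monitor, or accept) — is often kept in a risk register. A minimal sketch of such a register follows; the risk entries, the 1-5 scoring scale, and the impact-times-likelihood exposure measure are illustrative assumptions, not prescribed by the SEI practice:

```python
from dataclasses import dataclass

# The four handling options named in Practice 4.
HANDLING = {"prevent", "mitigate", "monitor", "accept"}

@dataclass
class Risk:
    description: str
    impact: int                 # hypothetical scale: 1 (low) .. 5 (high)
    likelihood: int             # hypothetical scale: 1 (low) .. 5 (high)
    handling: str = "monitor"   # default: track as a residual risk

    @property
    def exposure(self):
        """A simple illustrative score for prioritization."""
        return self.impact * self.likelihood

def prioritize(register):
    """Order risks so the highest exposures are addressed first."""
    return sorted(register, key=lambda r: r.exposure, reverse=True)

# Hypothetical register entries, including the accepted residual risk
# the text mentions (a price increase for a purchased component).
register = [
    Risk("price increase for a critical purchased component", 2, 3, "accept"),
    Risk("unpatched vulnerability in a customer-facing service", 5, 4, "mitigate"),
    Risk("single point of failure in backup power", 4, 2),
]

assert all(r.handling in HANDLING for r in register)
top = prioritize(register)[0]
print(top.description, top.exposure)
```

Revisiting the register — adding new risks, re-scoring existing ones, and changing handling decisions — is what makes the practice continuous rather than a one-time assessment.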
A large body of work by the Software Engineering Institute and the MITRE Corporation describes specific vulnerabilities and software weaknesses. In particular, MITRE has established a large resource in its Common Vulnerabilities and Exposures (CVE) repository, where it makes classes of vulnerabilities and solutions available.

5. Incident management. Incident management is one of the disciplines that most naturally comes to mind when one considers operational resilience management. It is the end-to-end handling of a disruptive event, from the time that something happens to when it is detected, triaged, and resolved. Disruptive events include deliberate or inadvertent harmful actions of people, failed internal processes, technology failures, and external events such as natural disasters and power outages. Implementing this practice begins before an incident occurs, when an organization plans for and assigns roles and responsibilities, including those for key stakeholders and decision makers (for escalation). Operational staff should be trained not only in delivering the services and conducting the operations for which they have responsibility but also in the results and effects to expect from performing these services and operations. Operational staff are often the first staff capable of detecting an incident; such training should thus make them more sensitive to unexpected deviations from "normal" results and effects. Once an incident is detected, the first step is to carefully note the circumstances of the incident, declare the incident, and preserve evidence. The organization may have prepared an immediate workaround for just such an incident. If so, that workaround is often implemented by the same staff who detect the incident. Otherwise, the organization analyzes the incident to develop an appropriate response, including recovery actions that minimize the disruption.
When analyzing the incident, the incident-handling team looks for patterns or similarities to other incidents that they may have seen in the past. The organization may perform a root-cause analysis and identify and evaluate multiple candidate solutions. The next steps are to implement the solution—respond and recover. The incident-handling team should also ensure that the organization communicates with key stakeholders, who can provide needed resources and expertise immediately or later in the incident resolution. Once the incident is closed, the organization should conduct a postmortem analysis to determine if the organization should make any improvements to its overall incident management, risk management, and service delivery (operations) processes. The organization should define measures to help evaluate the effectiveness of its responses to disruptive incidents. It will analyze those measures of effectiveness to determine where to improve its practices.

6. Service continuity. This practice entails ensuring the continuity of essential operations and services during and following a disruptive event. Service continuity may include business continuity, disaster recovery, crisis management, and pandemic planning. Activities encompassed by this practice include developing service-continuity plans, assigning roles and responsibilities, and then testing plans and running exercises to ensure that the plans are robust. For example, the organization should establish plans about what to do with its workforce if it must evacuate its facility and stand up an alternative facility to continue operations. Tests and exercises can cover a wide range of activities and may include computer simulations. Organizations should ensure the continuity of the services they provide through careful preparation and planning. The resilient organization tracks the location of key personnel and backup personnel, so that in the event of an incident, they can put recovery plans into action.
Through exercises and drills, the organization ensures that everyone knows his or her role. When an event like Hurricane Sandy happens, the resilient organization does what it has rehearsed.

7. Critical asset protection. Critical assets (e.g., information, technology, facilities) that support high-value services must be identified, protected, and maintained. In particular, an organization must ensure that it applies adequate controls to protect the confidentiality, integrity (i.e., information security), and availability of information essential or entrusted to the business. Such controls can include maintaining an up-to-date inventory of the information that the organization must protect, the devices on which that information resides, and the networks over which it may be transmitted. In addition, an organization should have practices for configuring, tracking, protecting, and maintaining its IT assets (e.g., workstations, laptops, mobile devices, and network components). Protecting critical assets requires continually identifying and mitigating threats to the asset (e.g., as part of a comprehensive risk-management practice, discussed in Practice 4); improving, retiring, and adding new controls to the asset to maintain its integrity; and establishing appropriate identity and access management to limit access to the asset. Critical asset protection also includes facility protection, such as for an organization's IT assets, and includes facilities for backup and recovery.

8. External-dependencies management. An organization must identify and manage dependencies on external entities, such as its supply chain. Key elements of this practice include prioritizing external dependencies, managing risks arising from external dependencies, and formalizing relationships with external entities.
Organizations should make sure that formal and contractual agreements are in place with external entities and that everyone understands what is expected from each party, in particular with respect to disruptions in delivery of critical components or services. To ensure preparedness, an organization should proactively monitor and manage the performance of external entities to make sure they meet expectations.

9. Secure software development and integration. Organizations must ensure that software that enables or performs the delivery of critical services and operations satisfies resilience requirements. An organization derives resilience requirements for such software in part from its resilience-management activities, including governance and program management (Practice 1), service continuity (Practice 6), and critical asset protection (Practice 7). For example, mitigating a particular threat to an asset may impose resilience (and security) requirements on the software that controls it or access to it. An organization should also elicit or collect requirements from stakeholders, including customers, end users, suppliers, other partners, and regulatory authorities. Multiple frameworks provide recommended practices for software development that address security and other resilience-related topics (see Learn More for more information). Many of the challenges noted for the practices at the top of this webpage apply to the practices described in such frameworks as well.

How can you derive more benefit from the recommended practices for managing operational resilience?

1. Coordinate the implementation of these practices. Implementing these practices requires competence in several disciplines (incident management, asset protection, risk management, etc.).
Organizations that create a separate solution or team to deal with each practice will find their operational resilience-management activities to be inefficient and difficult to manage due to the overlaps (e.g., where do incident management, disaster recovery, and asset protection and sustainment begin or end?). Just as the implementation of each operational resilience-management practice should be driven by business objectives, so should their collective implementation. Organizations will improve their operational resilience by taking an integrated approach to implementing these activities and ensuring that there is adequate coordination among them. Begin by gathering representatives from the different disciplines and departments to develop end-to-end scenarios that describe how the organization should respond to particular threats (as described in Practice 2). Identify which disciplines or departments (e.g., incident analysis, disaster recovery, and crisis communication) to involve at each stage of the response, including afterward, when making improvements to processes and training for service delivery, service continuity, and information security. Then determine how the organization should coordinate its activities in such scenarios. Such rehearsals or simulations help identify superior ways to implement the operational resilience-management practices. The following diagram may help you remember the purpose of each resilience-management practice. The two practices in the "Stop the bleeding" row deal primarily with resolving incidents. The "Improve and manage" row of the diagram depicts the practices that provide infrastructural and foundational support for establishing, facilitating, measuring, and improving asset protection and operations-sustainment activities.
The position of those practices in the diagram also indicates their role in protecting and sustaining the health of the organization and continually improving operational resilience-management activities. The diagram illustrates the need for all the operational resilience-management practices to work together.

2. Maintain currency with relevant standards. In the past 10 years, standards have exploded across all disciplines in national and international efforts to deal with the growing number of cybersecurity failures. The number of standards dealing with preparedness planning has quadrupled since 2005. An organization should develop an integrated approach to updating its processes to maintain compliance with standards relevant to its business. For example, when ISO/IEC Standard 27034, Information Technology—Security Techniques—Application Security, was published, its guidance affected business managers, IT managers, developers, auditors, and end users. An organization should involve designers, programmers, acquisition managers, IT staff, and users to determine what changes are needed to preserve the effectiveness of operational resilience-management activities while addressing this standard.

3. Understand compliance issues. Compliance issues affect all the recommended practices. An organization must not only follow federal and state legislation and regulations but also be aware that state-by-state differences exist. For example, state requirements vary for notifications about data breaches, and this will inform the organization's communication practices. However, an organization should view compliance as an outcome of an integrated operational resilience-management program, not a goal. Simply following a rule may not be sufficient to plan for and mitigate risk; new risks arise much faster than the rate of legislation.
Looking Ahead Technology transition is a key part of the SEI’s mission and a guiding principle in our role as a federally funded research and development center. The next post in this series will present recommended practices for the software development of safety-critical systems. We welcome your comments and suggestions on this series in the comments section below. Additional Resources For comprehensive information about CERT's research on operational resilience management, please see www.cert.org/resilience. For more information about frameworks and maturity models, please see Buyer Beware: How to be a Better Consumer of Security Maturity Models, presented by Julia Allen and Nader Mehravari at the February 2014 RSA Conference. Richard A. Caralli, Julia H. Allen, and David W. White also published the book CERT Resilience Management Model (CERT-RMM): A Maturity Model for Managing Operational Resilience (Addison-Wesley Professional, 2011). For a detailed list of resources on managing operational resilience, frameworks and maturity models, risk management, external dependencies management, resilience engineering, and resilience policy development, please visit https://www.csiac.org/spruce/resources/ref_documents/recommended-practices-managing-operational-resilience.
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:12pm</span>
In just a few weeks, our profession will gather at the premier HR event...
SHRM   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:12pm</span>
With the recent announcement of Yammer’s purchase by Microsoft, the social media software companies are already scrambling to figure out their strategies.  Just last week Oracle announced the acquisition of Involver, a social media company.  This would be Oracle’s third acquisition of a social media company in the last 2 months. Salesforce has also acquired Buddy Media in the social media space.  This has left leaders like Jive scrambling to determine their next move. We at Netwoven are pretty excited about this acquisition and the impact it will make on our customers’ social media strategies.  Microsoft’s brilliant move is reminiscent of the acquisitions Microsoft made several years ago in the collaboration, web content management, and business intelligence space.  For readers, here’s a summary of the moves Microsoft made and the impact on the technology landscape:
Collaboration and Document Management
In 2003 Microsoft released SharePoint with integrated portal and collaboration features.  This hastened the acquisition of eRoom by Documentum to better compete in this integrated environment.  Later Documentum was purchased by EMC.  FileNet and Hummingbird were also later acquired by different companies. Today, there are no standalone document management system vendors out there.  They are integrated in a smart suite (as Gartner calls it).
Web Content Management
In the early 2000s, Microsoft purchased nCompass Labs to improve its web content management capabilities.  This led to the acquisition of RedDot by Livelink, followed by the acquisition of Interwoven by Autonomy. Today, there are no standalone web content management system vendors out there.  They are integrated into a suite.
Enterprise Search
In the late 2000s, Microsoft purchased the FAST search engine to improve its search capabilities.  This has already shaken the entire landscape.  Oracle has purchased several companies to better compete with Microsoft.  These companies include Stellent and Endeca.  
OpenText has purchased several companies to better compete.  Autonomy has purchased Interwoven to better compete in the integrated environment. Today, there are still a few standalone search vendors remaining.  It remains to be seen whether they are able to stay independent or will be acquired by other software giants.
Business Intelligence
Microsoft made its first acquisition of an OLAP engine in the late 1990s.  Since then it has made several acquisitions in the BI space that have changed the complete landscape.  Today Microsoft’s BI products are tightly integrated with SharePoint.  This has led to the acquisition of most of the major BI vendors, including Cognos, Business Objects, Crystal Reports, Hyperion, Brio, and many others. Today, there are very few standalone BI vendors remaining.  The BI landscape is constantly changing, so there is an expectation that innovation will continue in this space as big data becomes a big issue for enterprises to deal with. In my opinion, the purchase of Yammer will have a significant impact equal to or greater than the other purchases and will begin the cycle of further innovations. About the Author This article is written by Niraj Tenany, President and CEO of Netwoven and an information management practitioner.  Niraj works with large and medium-sized organizations and advises them on enterprise content management and business intelligence strategies.  For additional information, please contact Niraj at ntenany@netwoven.com.
Netwoven   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:11pm</span>
Yesterday, the United States Supreme Court, in an 8-1 decision, ruled that an employer that does not know that a job applicant may need a religious accommodation can still unlawfully discriminate against that job applicant. All that matters is the employer’s motivation. Allow me to explain. It’s not what you know; it’s what motivates you. In EEOC v. Abercrombie & Fitch, the national apparel chain declined to hire Samantha Elauf, a practicing Muslim, because she wore a headscarf. Ms. Elauf wore the headscarf for religious reasons, but never told Abercrombie that she needed a religious accommodation for her headscarf. Still, Abercrombie assumed...
SHRM   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:11pm</span>
By Sarah Sheard, Member of the Technical Staff, Software Solutions Division. This post is the first in a series introducing our research into software and system complexity and its impact in avionics. On July 6, 2013, an Asiana Airlines Boeing 777 airplane flying from Seoul, South Korea, crashed on final approach into San Francisco International airport. While 304 of the 307 passengers and crew members on board survived, almost 200 were injured (10 critically) and three young women died. The National Transportation Safety Board (NTSB) blamed the crash on the pilots, but also said "the complexity of the Boeing 777’s auto throttle and auto flight director—two of the plane’s key systems for controlling flight—contributed to the accident." In a news report, acting NTSB chairman Christopher Hart stated that "The flight crew over-relied on automated systems that they did not fully understand." The NTSB report on the crash called for "reduced design complexity" and enhanced training on the airplane’s autoflight system, among other remediations. Since complexity is a vague concept, it is important to determine exactly what it means in a particular setting. This blog post describes a research area that the Carnegie Mellon University Software Engineering Institute (SEI) is undertaking to address the complexity of aircraft systems and software. The Growing Complexity of Aircraft Systems and Software The growing complexity of aircraft systems and software may make it difficult to assess compliance with airworthiness standards and regulations. Systems are increasingly software-reliant and interconnected, making design, analysis, and evaluation harder than in the past. While new capabilities are welcome, they require more thorough validation and verification. Greater complexity increases the risk that design flaws or defects will lead to unsafe conditions that go undiscovered and unresolved. 
In 2014, the Federal Aviation Administration (FAA) awarded the SEI a two-year assignment to investigate the nature of complexity, how it manifests in software-reliant systems such as avionics, how to measure it, and how to tell when too much complexity might lead to safety problems and assurance complications.  System Complexity Effects on Aircraft Safety Our examination of the effects of system complexity on aircraft safety began in October 2014 and involved several phases, including an initial literature review of complexity in the context of aircraft safety.  Our research is addressing several questions, including: What definition of complexity is most appropriate for software-reliant systems? How can that kind of complexity be measured, and what metrics might apply? How does complexity affect certifiability, validation, and verification of aircraft, their systems, and flight safety margins? Given answers to the metrics questions above, we will then identify which of our candidate metrics would best measure complexity in a way that predicts problems and provides insight into needed validation and certification steps. Other questions our research will address include: Given available sources of data on an avionics system, can measurement using these metrics be performed in a way that provides useful insight? Within what measurement boundaries can a line be drawn between systems that can be assured with confidence and those that are too complex to assure? The remainder of this post focuses on the findings of our literature review, which offered insights into the causes of complexity, the impacts of complexity, and three principles for mitigating complexity. Causes of Complexity While complexity is often blamed for problems, the term is usually not defined. When we performed a systematic literature search, we found this to be the case. 
As a result, our literature search broadened from simply collecting definitions to describing a taxonomy of issues and general observations associated with complexity.  (This work was primarily performed by my colleague, Mike Konrad.) Our literature review revealed that complexity is a state associated with causes that produce effects. We have a large taxonomy of different kinds of causes and another taxonomy of different kinds of effects. To prevent the impacts that complexity creates, one must reduce the causes of complexity, which typically include: Causes related to system design (the largest group of causes). Components that are internally complex add complexity to the system as a whole. Also, the interaction (whether functional, data, or another kind of interaction) of the components adds complexity. Dynamic behaviors also add complexity, including numbers of configurations and transitions among them. The way the system is modeled can add complexity as well. Causes that make a system seem complex (i.e., reasons for cognitive complexity). These causes include the level of abstractions required, the familiarity a user or operator (such as the pilot) has with the system, and the amount of information required to understand the system. Causes related to external stakeholders. The number or breadth of stakeholders, their political culture, and range of user capabilities also impact complexity. Causes related to system requirements. The inputs the system must handle, outputs the system must produce, or quality attributes the system must satisfy (such as adaptability or various kinds of security) all contribute to system complexity. In addition, if any of these change rapidly, that in itself causes complexity. Causes related to the speed of technological change. The added pressure that more capable, software-reliant systems place on technologies to accomplish even more also impacts complexity. Causes related to teams. 
The necessity and difficulty of working across discipline boundaries, and of creating process maturity in a rapidly evolving environment, also contribute to complexity. Impacts of Complexity After a system is deemed complex—no matter the reason—it is important to examine the problems or benefits of that complexity. Many consequences of complexity are known and considered to be negative, including higher project cost, longer schedule, and lower system performance, as well as decreased productivity and adaptability. Also, addressing critical quality attributes (e.g., safety versus performance and usability) in a system, or achieving a desired tradeoff between conflicting quality attributes, often results in additional design complexity. For example, to reduce the probability of a hardware failure causing an unsafe condition, redundant units are frequently designed into a system. The system then not only has two units instead of one, but it also has a switching mechanism between the two units and a way to tell whether each one is working. This functionality is often supported by software, which is now considerably more complex than when there was just one unit. Complexity also impacts human planning, design, and troubleshooting activities in the following ways: Complexity makes software planning harder (including software lifecycle definition and selection). Complexity makes the design process harder. For example, existing design and analysis techniques may fail to provide adequate safety coverage of high performance, real-time systems as they become more complex. Also, it may be hard to make a safety case just from design and test data, making it necessary to wait for operational data to strengthen the safety case. Complexity may make people less able to predict system properties from the properties of the components. Complexity makes it harder to define, report, and diagnose problems. Complexity makes it harder for people to follow required processes.   
Complexity drives up verification and assurance efforts and reduces confidence in the results of verification and assurance. Complexity makes changes harder (e.g., in software maintenance and sustainment). Complexity makes it harder to service the system in the field. Three Principles for Mitigating Complexity Our literature review also identified three general principles for mitigating complexity: Assess and mitigate complexity as early as possible. Focus on what in the system being studied is most problematic; abstract a model; and solve the problem in the model. Begin measuring complexity early, and when sufficient quantitative understanding of cause-effect relationships has been reached (e.g., what types of requirements or design decisions introduce complexity later in system development), establish thresholds that, when exceeded, trigger some predefined preventive or corrective action. Of course, these principles overlap, and are expressed and sorted differently by different authors. One could make the case that, in general, all practices of systems engineering started from a principle of managing complexity. The table below highlights some of the principles for managing complexity that we encountered during our literature review. Looking Ahead Our literature review also describes literature search results related to measurement and mitigation of complexity. Measurement and mitigation of complexity are the basis for the practice of good systems engineering, whether addressing collaboration among organizations and disciplines, requirements and systems abstractions and models, disciplined management and engineering, or even modular design, patterns, and refactoring. 
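The redundancy tradeoff described earlier (two units plus a switching mechanism and health checks) hints at how quickly software complexity grows from one safety decision. The sketch below is a minimal illustration of that failover logic; the unit names, the fixed reading, and the health-check flag are hypothetical, not code from any real avionics system:

```python
# Minimal sketch of failover logic implied by adding a redundant unit.
# Unit names, the fixed sensor reading, and the health flag are
# hypothetical illustrations only.

class Unit:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def read(self):
        if not self.healthy:
            raise RuntimeError(f"{self.name} failed")
        return 42.0  # stand-in for a real sensor reading


def read_with_failover(primary, backup):
    """The switching mechanism: extra software that exists only
    because a second unit was added for safety."""
    for unit in (primary, backup):
        if unit.healthy:
            return unit.read(), unit.name
    raise RuntimeError("both units failed")  # the unsafe condition


primary, backup = Unit("primary"), Unit("backup")
print(read_with_failover(primary, backup))  # served by primary

primary.healthy = False                     # simulate a hardware fault
print(read_with_failover(primary, backup))  # switched to backup
```

Even this toy version needs a health model, an ordering policy, and an error path for double failure; none of that code exists in a single-unit design, which is exactly the point the post makes.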
The next blog post in this series will detail the second phase of our work on this project, which focuses on determining how the breadth of aspects of complexity can be measured in a way that makes sense to system and software development projects, and specifically for aircraft safety assurance and certification. We welcome your feedback on our research. Additional Resources To read the SEI report Reliability Improvement and Validation Framework, by Peter H. Feiler, John B. Goodenough, Arie Gurfinkel, Charles B. Weinstock, and Lutz Wrage, please click here. The blog post is disseminated by the Carnegie Mellon University Software Engineering Institute in the interest of information exchange. While the research task is funded by the Federal Aviation Administration, the United States Government assumes no liability for the contents of this blog post or use thereof. The United States Government does not endorse products or manufacturers. Trade or manufacturer's names appear herein solely because they are considered essential to the objective of this blog post. The interpretations, findings, and conclusions in this report are those of the author(s) and do not necessarily represent the views of the funding agency. This document does not constitute FAA certification policy.
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:10pm</span>
The Office 15 release is going to be awesome! Here are some of my notes on the Office 15 features from the presentation:
- The Office ribbon is hidden by default
- All clients think cloud first, then local (SkyDrive, roaming profiles in the cloud)
- Inline reply (like Gmail)
- Peeks (image says it all)
- Office clients all tie into a marketplace in the cloud; reminds me of an idea that Zaplet had back in the 2000s
- Tablet radial menu
- Signing into your Office application
- PDF editing
- Post documents directly to Facebook
- "Toasts" remind you where you last were in the document
- Office is now social (lots of focus on "communication scenarios")
- SharePoint MySites now suggests documents you might want to follow
- SharePoint MySites allows you to use hash tags
- SharePoint "People cards" allow you to see all social networks
- Outlook (and Office) has Skype integration
- Flash Fill allows Excel worksheets to autofill across rows automagically
More to come once I get back to my desk
Netwoven   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:10pm</span>
By Julien Delange, Member of the Technical Staff, Software Solutions Division. Using the Architecture Analysis & Design Language (AADL) modeling notation early in the development process not only helps the development team detect design errors before implementation, but also supports implementation efforts and produces high-quality code. Our recent blog posts and webinar have shown how AADL can identify potential design errors and help avoid propagating them through the development process, where remediation can require massive re-engineering, delay the schedule, and increase costs. Verified specifications, however, are still implemented manually, which can lead to additional errors and might break previously verified assumptions and requirements. For these reasons, code production should be automated to preserve system specifications throughout the development process. This blog post focuses on generating code from AADL and generating configuration files for ARINC653 systems, which are used by the avionics community. Avionics and other safety-critical systems are becoming increasingly reliant on software. For example, the F-35 Lightning II is a fifth-generation fighter jet that contains more than 8 million lines of software code (LOC), four times the amount of the world’s first fifth-generation fighter, the F-22 Raptor. This upsurge in software reliance motivates the need to verify and validate requirements early in the software development lifecycle, as requirements errors are often propagated from the design phase to the implementation phase.  The remainder of this blog post gives examples of how AADL is being used to generate code for software-reliant avionics systems. AADL Model Overview AADL was approved and first published as SAE Standard AS-5506 in November 2004. Version 2.1 of the standard was published in September 2012. AADL uses the Open Source AADL Tool Environment (OSATE), an Eclipse-based modeling framework, to design, validate, and analyze AADL models. 
AADL is designed for the specification, analysis, automated integration, and code generation of real-time distributed computer systems with performance-critical requirements such as timing, safety, schedulability, fault tolerance, and security. AADL provides a new tool set to allow analysis of system and system-of-system designs prior to development and supports a model-based, model-driven development approach throughout the system lifecycle. As described in our earlier blog posts, AADL can lower development and maintenance costs by
- providing a standard, precise syntax and semantics for performance-critical systems, so that documentation can be well defined
- providing the ability to model large-scale architectures from multiple contractors in a single analyzable model that can be incrementally refined
- capturing the "architectural API (application programming interface)" needed to evaluate the effects of change, such as the emergent properties of integration (e.g., safety, schedulability, end-to-end latency, and security)
- allowing early and complete lifecycle tracking of modeling and analysis
- complementing functional simulation with analysis of system structure and runtime behavior
- providing the basis to establish a reference architecture and support product lines
ARINC 653 Overview
ARINC 653 (Avionics Application Standard Software Interface) is an avionics standard that defines the execution platform of software that focuses on safety and determinism. ARINC653 executes software components in separated partitions so that an error that occurs within one partition cannot impact the others. This isolation is done at two levels:
- Time: Each partition has a fixed, preconfigured execution time slot and cannot overuse it.
- Space: Each partition has a memory segment to store its code and data.
Partitions can communicate only with predefined and configured communication channels, and any attempt to open an unspecified channel raises an exception. 
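The time-partitioning rule described above can be illustrated with a small validation sketch: every partition gets a fixed window, the windows must not overlap, and together they must fit inside the major time frame. The schedule layout and field names below are hypothetical illustrations, not the ARINC653 configuration schema:

```python
# Hypothetical sketch of the core ARINC653 time-partitioning check:
# fixed, non-overlapping partition windows within one major frame.
# The tuple layout is illustrative, not a real configuration format.

def validate_schedule(major_frame_ms, windows):
    """windows: list of (partition_name, offset_ms, duration_ms)."""
    cursor = 0
    for name, offset, duration in sorted(windows, key=lambda w: w[1]):
        if offset < cursor:
            raise ValueError(f"partition {name!r} overlaps the previous window")
        cursor = offset + duration
    if cursor > major_frame_ms:
        raise ValueError("partition windows exceed the major frame")
    return True


# Four partitions sharing a 100 ms major frame.
schedule = [
    ("sensors", 0, 25),
    ("health_monitor_1", 25, 25),
    ("health_monitor_2", 50, 25),
    ("solver", 75, 25),
]
print(validate_schedule(100, schedule))  # True
```

A real system integrator's job is harder (windows must also be long enough for each partition's tasks), but this captures why the configuration must be checked rather than hand-written: a single overlapping or oversized window breaks the isolation guarantee.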
The ARINC653 standard specifies an API to create and manage software resources so that a software developer can switch from one operating system (OS) to another without significant development effort. ARINC653-compliant systems must be carefully designed to ensure that resources are correctly configured and allocated to partitions. For example, system integrators must check that a partition’s execution time is sufficient to execute system tasks or that no other communication channel will disturb the execution of critical functions. These stringent resource management requirements motivate the need for carefully specifying partitions with appropriate notation—such as AADL and its annex dedicated to ARINC653 systems, the ARINC653 annex—and analyzing to make sure that they meet designers’ requirements (time, safety, etc.). Ultimately, the models should be automatically processed by dedicated tools to configure the ARINC653 platform, avoiding hazardous manual activity, such as translating textual specifications to code. Using AADL to Generate Code for Avionics Systems Our work in the area of avionics systems focuses on generating code from AADL, which produces code for ARINC653 systems from verified models. Our approach generates the code for the different layers of the system: the ARINC653 module, which ensures time and space isolation of partitions, and its associated partitions, which contain resources to execute functional code. Auto-configuring the system from the models allows developers to use the same models for other analyses. Ultimately, automatically deriving the implementation code from the architecture model ensures that the implementation will comply with other generated materials. Moreover, auto-generating the code avoids errors introduced by manually written code. OSATE is able to generate ARINC653-compliant C code from the AADL model using a bridge to the Ocarina code generator. 
The generated code creates all the resources of a partition—including tasks, mutexes, and communication channels—that execute the functional code, whether it is designed using pure C code or functional modeling languages such as SCADE or Simulink. In addition to creating the partition code, OSATE can configure the underlying separation kernel to execute each partition—such as the scheduling parameter and inter-partition communication channels—at runtime. OSATE, and its Ocarina bridge, is able to generate the configuration file for two ARINC653 systems: DeOS and VxWorks653. By auto-producing the ARINC653 configuration artifacts, the user no longer needs to configure anything manually, which avoids traditional development errors such as misunderstanding  requirements or specifications and making manual coding errors. In addition, using the same model throughout the development process ensures that the configuration file reflects the architecture that was validated and analyzed during previous development steps. Two AADL Code Generation Case Studies AADL models are edited with the OSATE toolset, and the code is generated using the Ocarina AADL code generator tool. We integrated the generated code from models into two commercial ARINC653 operating systems: Deos from DDC-I VxWorks653 from Windriver Our intent was to automate the code production from the AADL models. Once AADL tools have validated the models, the system can be automatically deployed on top of different operating systems while preserving the characteristics that AADL validated when it analyzed the model. 
The remainder of this post describes our two applications: The ADIRU Example: generating ARINC653 XML configuration and C partition code from AADL. The SCADE Example: generating code from functional (SCADE) and AADL models and integrating functional models with code generated from AADL models. We applied both operating systems to these applications to demonstrate the capability of AADL to generate code on a variety of OS platforms. The ADIRU model represents an air data inertial reference unit (ADIRU) system and tries to reproduce a faulty component. The model has been presented at an AADL committee meeting, and the slides are available here. The model is composed of four main partitions: one to simulate the sensors, two for health monitoring, and another one for the solver. We used the AADL model to analyze the ADIRU model and generate the module configuration and partition code. SCADE captures the functional code—the code that corresponds to the subprograms—using C code, and we use the AADL code generator to produce the execution platform code and integrate it into an ARINC653 operating system. Inter-partition communications use AADL data and event data ports, which are translated into ARINC653 queuing and sampling ports in the ADIRU model. In the SCADE example, we generated code from SCADE that will be integrated on top of the code generated from the AADL model. The SCADE model is composed of several partitions:
- panel: simulates the joystick and on/off buttons from the panels that are sent to the SCADE node. We use the value 5.0 for the joystick and on for the button.
- sensors: simulates the sensor values sent to the panel. For this demo, we use a value of 500 for the left sensor and -200 for the right.
- roll control: executes the code generated from SCADE with the inputs from the panel and sensor partitions and then sends the result to the display partition.
- display: simulates a display that shows whether there is a warning from the left or right sensor. 
In the present case, according to the input values, the left-sensor warning should be activated. The goal of this experiment was to test that the model exhibits the same behavior, such as task execution order and values produced or received, in either the Deos or the VxWorks653 operating system. This assurance is very important because switching from one OS to another can be challenging: each one has specific features (e.g., task scheduling) that might change the execution behavior (task execution order). Our tool auto-generated the configuration file and preserved the execution semantics of the verified model. This demo shows the correct integration of software models: the values of the system that integrates the SCADE model are the same as the values obtained when OSATE simulated the model, showing a correct integration of the code. We captured these case studies in several videos to demonstrate the SCADE model, the AADL model, and how to generate and integrate code for both Deos and VxWorks653. Please click here to view the demos. Wrapping Up and Looking Ahead The project described in this blog post focuses on auto-generating ARINC653 module configuration and partition code from verified AADL models. Our approach not only creates the necessary code to configure the execution platform but also integrates functional models (i.e., SCADE), enabling a full zero-code development approach. The ARINC653 system was initially designed with a focus on safety. We are planning to extend and apply this work on a multiple, independent levels of security (MILS) platform, an operating system that implements software isolation for security purposes. Auto-generating the OS configuration ensures the enforcement of the security policy, such as isolation between classified and unclassified data, from the model to the code. Additional Resources For more information about AADL, please visit http://www.aadl.info/. 
To view our webinar, Architecture Analysis with AADL, please visit http://www.sei.cmu.edu/webinars/view_webinar.cfm?webinarid=424907&gaWebinar=ArchitectureAnalysiswithAADL.  
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:09pm</span>
Here are my top 10 words or expressions that none of us should dare say at the Annual Convention under penalty of listening to Barry Manilow for 24 hours straight while reading the FMLA intermittent regulations:
10.       Matrix
9.         Synergistic alignment
8.         Sea change
7.         Paradigm shift
6.         Knowledge share
5.         Change agent
4.         Value proposition
3.         Leverage best practices...
SHRM   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:09pm</span>
By Chris Taschner, Senior Research Engineer, CERT Cyber Security Solutions Directorate. This post is the latest installment in a series aimed at helping organizations adopt DevOps. Container-based virtualization platforms provide a means to run multiple applications in separate instances. Container technologies can provide significant benefits to DevOps, including increased scalability, resource efficiency, and resiliency. Unless containers are decoupled from the host system, however, there will be the potential for security problems. Until that decoupling happens, this blog post describes why administrators should keep a close eye on the privilege levels given to applications running within the containers and to users accessing the host system. Containers have become the hot new technology in DevOps. One company in particular, Docker, has emerged as the go-to provider for container technology. Using the Docker platform, an application can be packaged into a unit referred to as an image, along with all its dependencies. Docker can then run instances of that image. Each instance resides within a container. Docker is becoming synonymous with DevOps. If you are unfamiliar with the benefits of containers, in a nutshell they include readily available images and an easy-to-use public repository, image versioning, and the application-centric nature of Docker. (For more information see Three Reasons We Use Docker on devops.com.) Containers also offer a lot of benefit when it comes to their size. Unlike a virtual machine, a container doesn’t need the full operating system running or a virtual copy of all of the system’s hardware. The container needs only enough of the operating system and hardware information to run the application that it is responsible for. As a result, the container can be much smaller than a virtual machine, so a host system can run far more containers than virtual machines. To minimize what the container has to run, however, there are trade-offs. 
One of these trade-offs is less separation between the container and the host system; virtual machines, by contrast, provide much more separation from the host. The Docker user requires root privileges to run containers, and problems can arise if that user does not understand what is running in a container, particularly with images pulled from public repositories. Often these repositories are not vetted, meaning anyone can create and post an image. Obviously, there are security implications associated with putting too much trust in containers downloaded from the internet. The problem of shared namespaces is often cited as one of the largest problems with Docker. A namespace refers to groups created by the kernel that designate access levels for different resources and areas in the system. The reason that Docker does not have a different namespace for each of its containers is scaling—if you have hundreds of containers running, each would need its own namespace. In addition, if a container wants to share storage, all namespaces sharing that storage must have explicit access to it. In response to some of these security concerns, Docker has published an article detailing how it attempts to mitigate possible issues. Included in these mitigations is guidance for limiting permissions to the server, both for those with direct access to the host and for applications running within the containers. Beyond Docker's security guidance, others have chimed in to provide help in securing containers. One potential solution to the issue of shared namespaces is to use Seccomp, a process security tool; Daniel Walsh detailed this work-around on opensource.com. Administrators should always know exactly what their containers are running, and images downloaded from Internet repositories should be carefully vetted before being run in any sensitive environment. 
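The privilege-limiting guidance above can be illustrated with a minimal sketch of a hardened `docker run` invocation. The image name and seccomp profile path are placeholders, not recommendations, and the flags shown are one possible combination rather than a complete hardening recipe:

```shell
# Illustrative only: the image name and seccomp profile path are placeholders.
# Run as an unprivileged user, drop all Linux capabilities, add back only the
# one the app needs, mount the filesystem read-only, and restrict syscalls.
docker run --rm \
  --user 1000:1000 \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --read-only \
  --security-opt seccomp=profile.json \
  example/app:latest
```

The point is that the administrator, not the image author, decides what the container may do on the host.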
As a general rule, despite what the name implies, containers shouldn't be expected to fully contain the applications running within them. Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below. Additional Resources To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits, please click here. To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here. To listen to the podcast DevOps—Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here. To read all of the blog posts in our DevOps series, please click here.
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:09pm</span>
By Kevin Fall, Deputy Director, Research, and CTO, SEI. Software and acquisition professionals often have questions about recommended practices related to modern software development methods, techniques, and tools, such as how to apply agile methods in government acquisition frameworks, systematic verification and validation of safety-critical systems, and operational risk management. In the Department of Defense (DoD), these techniques are just a few of the options available to face the myriad challenges in producing large, secure software-reliant systems on schedule and within budget. In an effort to offer our assessment of recommended techniques in these areas, the SEI built upon an existing collaborative online environment known as SPRUCE (Systems and Software Producibility Collaboration Environment), hosted on the Cyber Security & Information Systems Information Analysis Center (CSIAC) website. From June 2013 to June 2014, the SEI assembled guidance on a variety of topics based on relevance, maturity of the practices described, and timeliness with respect to current events. For example, shortly after the Target security breach of late 2013, we selected Managing Operational Resilience as a topic. Ultimately, the SEI curated recommended practices on five software topics: Agile at Scale, Safety-Critical Systems, Monitoring Software-Intensive System Acquisition Programs, Managing Intellectual Property in the Acquisition of Software-Intensive Systems, and Managing Operational Resilience. In addition to a recently published paper on SEI efforts and individual posts on the SPRUCE site, these recommended practices will be published in a series of posts on the SEI blog. This post, the first in a series by Peter Feiler, Julien Delange, and Charles Weinstock, presents the challenges in developing safety-critical systems and then introduces the first three technical best practices for the software development of such systems. 
The second post in the series will present the remaining five practices. Safety-Critical (SC) Systems - SPRUCE / SEI: https://www.csiac.org/spruce/resources/ref_documents/safety-critical-sc-systems-spruce-sei Our discussion of technical best practices for the software development of safety-critical (SC) systems has four parts. First, we set the context by addressing the questions "What are SC systems and why is their development challenging?" Second, we present three of the eight technical best practices for SC systems. We then briefly address how an organization can prepare for and achieve effective results from following these best practices. Finally, we have added links to various sources to help amplify a point; please note that such sources may occasionally include material that differs from some of the recommendations below. Every organization is different; judgment is required to implement these practices in a way that provides benefit to your organization. In particular, be mindful of your mission, goals, existing processes, and culture. All practices have limitations—there is no "one size fits all." To gain the most benefit, you need to evaluate each practice for its appropriateness and decide how to adapt it, striving for an implementation in which the practices reinforce each other. Monitor your adoption and use of these practices and adjust as appropriate. What are SC systems and why is their development challenging? Software systems are getting bigger and more crucial to the things we do. The focus here is on SC systems—systems "whose failure or malfunction may result in death or serious injury to people, loss or severe damage to equipment, or environmental harm." Examples of SC systems include systems that fly commercial airliners, apply the brakes in a car, control the flow of trains on rails, safely manage nuclear reactor shutdowns, and infuse medications into patients. If any of these systems fail, the consequences could be devastating. 
We briefly expand on several examples below. Today we take for granted "fly-by-wire" systems, in which software is placed between a pilot and the aircraft's actuators and control surfaces to provide flight control, thereby replacing mechanical parts subject to wear and providing rapid real-time response. Fly-by-wire achieves levels of control not humanly possible, providing "flight envelope protection," in which the aircraft's behavior within a specifiable envelope of physical circumstances (specific to that aircraft) can be accurately predicted. Pilots train on the fly-by-wire system to fly that type of aircraft safely; therefore, the loss of fly-by-wire capabilities reduces safety. To provide a medical device example, the FDA is taking steps to improve the safety of infusion pumps, whose use in administering medication (or nourishment) has become a standard form of medical treatment. Infusion pump malfunctions or their incorrect use have been linked to deaths (see "FDA Steps Up Oversight" and "Medtronic Recalls Infusion Pump"). The experience with infusion pumps has similar implications for other medical devices, such as pacemakers and defibrillators. SC systems are increasingly software-reliant, pervasive, and connected. These properties challenge current development practices to successfully develop and evolve such systems while continuing to satisfy real-time and fail-safe performance requirements. 
The practices covered here are intended to address objectives such as the following: rigorously anticipating and addressing scenarios for how the system might fail (and not just the typical "sunny-day" scenarios); identifying defects that can lead to failure early in the lifecycle, since defects identified later are generally much more expensive to correct; maintaining an appropriate specification of the system requirements and architecture that summarizes what the system must do and how it must do it, which experts in nonfunctional quality attributes (timing, security, etc.) can subject to analysis; and ensuring that the system is evolvable and developable in increments (requirements and solutions may change). Technical Best Practices for Safety-Critical Systems 1. Use quality attribute scenarios and mission-thread analyses to identify safety-critical requirements. SC requirements are typically documented through some combination of quality attribute scenarios and mission-thread workshops. A quality attribute scenario is an extended use case that focuses on a quality attribute, such as performance, availability, security, safety, maintainability, extensibility, or testability. A mission thread is a sequence of end-to-end activities and events, given as a series of steps that accomplish the execution of one or more capabilities that the system supports. Surveys and analyses of product returns and legal actions can help identify safety and related operational concerns with existing products. In the infusion pump example, pump faults have been linked to a reported 710 deaths. Like other systems, despite best efforts, SC systems may still fail, but the failure must be handled in a graceful way that protects the main asset—human lives, property, or the environment. 
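The quality attribute scenarios mentioned above are commonly written in a six-part form (source, stimulus, environment, artifact, response, response measure). A minimal sketch of that form as a record, with illustrative values for the infusion-pump example; the specific sensor name and numbers are assumptions, not regulatory requirements:

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    """Six-part scenario form used in quality attribute workshops."""
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition the system must respond to
    environment: str       # operating conditions when the stimulus arrives
    artifact: str          # the part of the system that is stimulated
    response: str          # the required behavior
    response_measure: str  # how success is judged, quantitatively

# Illustrative safety scenario for the infusion-pump example.
bubble_scenario = QualityAttributeScenario(
    source="air-in-line sensor",
    stimulus="air bubble larger than a set size detected",
    environment="normal infusion in progress",
    artifact="pump motor controller",
    response="halt infusion and raise an audible alarm",
    response_measure="pump stopped within 100 ms of detection",
)
print(bubble_scenario.response_measure)
```

Forcing every scenario into this shape is what makes the response measure (and thus testability) explicit.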
For example, in the case of an infusion pump, the definition of a graceful failure depends on the circumstances: in some cases treatment should stop, while in other cases, such as intravenous feeding and chemotherapy, halting the treatment entirely may be more dangerous than delivering too much volume. Clearly, different failure scenarios may require different outcomes. The Quality Attribute Workshop (QAW) is one mechanism for eliciting SC quality attribute scenarios and identifying and specifying SC requirements. Challenging mission-critical requirements that create the need for novel solutions are a principal source of SC requirements. For example, high-performance military aircraft, such as the F-117 Nighthawk and the B-2 Spirit flying wing, are designed to be highly aerodynamic and highly maneuverable, qualities that are achieved by transferring stability requirements from the pilot to the flight-control software. It is no longer possible for humans to fly these aircraft unaided; instead, the aircraft are largely flown by the flight-control software, which must be at least as reliable as a pair of pilots would be. The effort to identify SC requirements is ongoing and tied to the other practices. For instance, when developing assurance cases, it is important to provide justification that the product design or development process addresses a particular failure scenario. 2. Specify safety-critical requirements, and prioritize them. This practice highlights a few of the many important considerations in the specification of SC systems. A fuller set of considerations can be found in the FAA Requirements Engineering Management Handbook. 
For the SC system, specify both mission-critical requirements (function, behavior, performance) and safety-critical requirements (safety, reliability, security), the latter as described in Practice 1. Mission-critical requirements can be specified using, for example, state-machine representations of behavior, such as UML state charts or Simulink Stateflow, or scenario-driven threads through system functions that help derive the system's behavioral requirements. Inherent to the specification of a quality attribute is some measure of the desired outcome, which aids in specifying the intended outcome of a scenario with greater clarity and assessing success with greater objectivity. In fact, quality attribute scenarios require some unit of measure. Measures are also important when specifying SC requirements; it is important to utilize or introduce some measure of behavior or performance as a first step to setting a threshold. Such measures can often be established by thinking through what an alternative or current approach requires: returning to our flight-control example, the probability of both the pilot and copilot suffering heart attacks over a ten-hour mission is about 10^(-9), and this establishes a reliability threshold for the software. As the system's architecture emerges, identify which component (or subsystem) each safety requirement applies to, recognizing that in some cases multiple components may need to meet a requirement collectively (and possibly a derived requirement would then be specified for each component). Review the requirements, identifying which ones are safety critical and which ones are not, and which are the most important. The requirements that deserve the most attention deal with incidents that are more likely to happen or that have the most catastrophic effects. For example, for a fly-by-wire aircraft, you care about the effect a coffee pot has on the electrical system, but not to the same extent that you care about the flight-control software. 
The latter will require many times more resources and attention than the former. Priorities should be set with stakeholders who may be able to better assess the probability of failure (technologists and end users) and the impact of failure (end users and other stakeholders) in the context of particular missions. One key to not only specifying but also prioritizing requirements is therefore knowing who your stakeholders are and determining how, when, and why you will engage them during the project. Typically, the result of prioritization is a set of requirements with associated criticality levels. You'll have requirements such as "the system must operate with some minimal functionality for some period of time" and "if some component fails, the system must be ready to take over so that it can fail safely with a probability of nine 9s (i.e., 1.0 - 10^(-9))." It is often beneficial to explore alternatives in the allocation of requirements to components because alternatives may offer superior cost/feature tradeoffs (especially when alternative architectures are also considered—see Practice 4, which will appear in Part 2 of this blog posting). Such exploration should also be considered for achieving fail-safe operation; for example, some alternatives may explore the use of redundancy. You are unlikely to get the set of requirements right the first time, so expect some iteration through the requirements and adjustment to the allocation of requirements, especially as the architecture, priorities, and tradeoffs emerge or become better understood. 3. Conduct hazard and static analyses to guide architectural and design decisions. Apply static analyses to the specification of the system (including mission threads, quality attribute scenarios, requirements, architecture, and partial implementation), or to models derived from those specifications, to help determine what can go wrong and, if something can go wrong, how to mitigate it. 
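The connection between redundancy and the reliability thresholds cited above can be shown with a back-of-envelope calculation. This assumes the redundant channels fail independently, which is a strong assumption; common-cause failures (shared power, a shared software defect) can invalidate it:

```python
# Back-of-envelope sketch: with k independent redundant channels, each
# failing with probability p per mission, all k fail together with
# probability p**k. Independence is an assumption for illustration only.
def prob_total_failure(p: float, k: int) -> float:
    return p ** k

# With an illustrative per-channel failure probability of 1e-3 per mission,
# triple redundancy reaches the 1e-9 regime cited for flight control.
assert prob_total_failure(1e-3, 1) == 1e-3
assert abs(prob_total_failure(1e-3, 3) - 1e-9) < 1e-24
```

This is also why a shared software defect is so dangerous: it correlates the channel failures and breaks the exponent.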
The analyses result in "design points" for the components that must be safety critical. In our infusion pump example, at first glance the design seems pretty simple. Among other things, you need a pump, something to control the rate of the motor, and a keypad for someone to enter the dosage and frequency. But when you consider manufacturing for a large market, you need to carefully consider what can go wrong and document situations that you will need to address. Note that such considerations might not have been part of the original infusion-pump concept. For example, embolisms can result when air bubbles beyond a certain size enter the patient. To protect the patient from air getting into the line of the infusion pump, you will need to design certain components of the system to prevent that from happening and other components to detect it if it does happen. From a hardware standpoint, you'll need some kind of sensor that detects air bubbles of a certain size. From a software standpoint, if an air bubble is detected, the pump will need to shut down and raise an alarm (while shutting down the pump may be harmful, an embolism is generally worse). You'll also need to guarantee that these actions take place, which means employing redundancy or some other fault-tolerance technique. More generally, the development of SC systems must address several operational challenges, among them how to deal with system failure. This challenge in turn means that the system must monitor its operation to detect when a fault is going to occur (or is occurring), signal that failure is imminent or in process, and then ensure that it fails in the right way (e.g., through fault-tolerant design techniques). Depending on the degree of criticality, you might need a lot of redundancy in both the hardware and software to ensure that, at the very least, the fail-safe portion of the system runs. 
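The software-side mitigation described above (detect air in line, shut down, alarm) can be sketched in a few lines. The sensor and pump interfaces here are hypothetical stand-ins, not a real device API:

```python
# Minimal sketch of the software-side mitigation: on air-in-line
# detection, stop the motor first, then raise the alarm. Stopping the
# pump may itself be harmful, but an embolism is generally worse, so
# shutdown is the safe state here.
ALARM_LOG = []

def on_sensor_reading(bubble_detected: bool, pump: dict) -> None:
    if bubble_detected:
        pump["running"] = False          # shut the motor down
        ALARM_LOG.append("AIR-IN-LINE")  # raise the alarm

pump = {"running": True}
on_sensor_reading(True, pump)
```

In a real device this reaction path itself would be made redundant, as the text notes, so that a single software fault cannot silently disable it.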
Another approach is to implement the SC system as a state machine in which, once the device reaches a failed state, it automatically transitions to a safe state (albeit not necessarily an operational state). Returning to our fly-by-wire example, both redundancy and failing to a safe state have been utilized: fly-by-wire aircraft systems have been designed with fourfold redundancy, which requires monitoring and voting logic to resolve disagreement among duplicated subsystems, and with automatic reversion to manual and mechanical backup controls, as in the Tornado aircraft. A rich taxonomy of architectural and design tactics has been developed over the years to help in detecting, recovering from, and preventing faults. Some static analysis methods, such as hazard analysis and failure mode and effects analysis (FMEA), have been around for decades and provide broad and proven approaches to assessing system reliability. There are several forms of FMEA, but they all undertake a systematic approach to identifying failures, their root causes, mitigations for selected root causes, and the kinds of monitoring required to detect failures and track their mitigation. The result of an FMEA can engender the need for additional design, such as adding a sensor to help detect an indication of failure or progress in its mitigation, followed by another iteration of FMEA to recalculate the risk exposure and new risk priorities, and so on. The result of a hazard analysis is a characterization of the risks and mitigations associated with high-priority hazards, including likelihood, severity of impact, how the hazard will be detected, and how it will be prevented. Other analysis methods focus on how the system responds in situations of resource contention or communication corruption. These include timing studies (can critical task deadlines be met?) and scheduling analyses (e.g., to eliminate priority inversion, deadlock, and livelock). 
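The voting logic mentioned above for redundant channels can be sketched as a median vote over replicated readings. This is illustrative only; real flight-control voters also flag the disagreeing channel for maintenance and handle channel dropout, which is omitted here:

```python
import statistics

def vote(readings: list[float]) -> float:
    """Resolve disagreement among redundant channels with a median vote.

    A median tolerates a minority of faulty channels producing wild
    values, which a plain average would not.
    """
    if not readings:
        raise ValueError("no channels available")
    return statistics.median(readings)

# Three channels agree; the fourth has failed high and is outvoted.
assert vote([10.0, 10.0, 10.0, 99.0]) == 10.0
```

The design choice is that agreement among a majority of independent channels is taken as ground truth, which is exactly why the independence of those channels matters so much.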
Such resource contention problems were largely solved years ago for simple processor and memory configurations, and the solutions have been progressively extended to deal with distributed systems, multilayer caches, and other complexities in hardware configuration. In our infusion-pump example, specifying the device in an appropriate formal language will allow timing studies of SC requirements to be conducted. For example, a timing study could investigate whether the air-bubble monitoring process will be able to execute frequently and consistently enough (perhaps as a function of motor speed) to ensure adequate time to shut down the pump. Some of these analyses may involve creating or generating proofs, using theorem provers, that certain components or configurations of components can achieve certain properties. These high-confidence software and systems analysis techniques are particularly critical for very high-risk requirements and components. In Practices 4 and 5, we will have more to say about static analyses and will note some limits to what can currently be achieved with their use. In Practice 8, we will see that the hazard and static analyses and formal proofs described in Practice 3 feed into the development of the safety case. Looking Ahead Technology transition is a key part of the SEI's mission and a guiding principle in our role as a federally funded research and development center. The next post in this series will present the remaining recommended practices for developing safety-critical systems. These practices are certainly not complete—they are a work in progress. We welcome your comments and suggestions on this series in the comments section below. Additional Resources To view the complete post on the CSIAC website, which includes a detailed list of resources for developing safety-critical systems, please visit https://www.csiac.org/spruce/resources/ref_documents/safety-critical-sc-systems-spruce-sei.  
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:09pm</span>
It’s that time of year again! 30,000 of the smartest people in HR will be heading to Sin City to talk employment policy, certification process, succession planning, performance management, employee engagement, hiring, firing and a whole lotta generational stereotyping. Indeed friends, #SHRM15 is upon us! The 2015 SHRM Annual Conference & Exposition is a superb opportunity to learn and network, but what if you aren’t keen on networking? Some of us would prefer to take in sessions, receive recertification credits and be left alone. You’ll see people in the deepest corridors of...
SHRM   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:09pm</span>
One of the cool new features of SharePoint 2013 is the concept of SharePoint Apps. Developers who are familiar with SharePoint 2010 will compare this with a Sandbox Solution, but that doesn't do it justice: sandboxes are for children to play in; SharePoint Apps are for fully grown developers to build business applications and sell them (licensing is starting to be defined). Just in the few MSDN articles that have been released about them, I see stark differences. For starters, SharePoint Apps can cross site collections within the same "tenant" (a new SharePoint grouping that I will cover in a future article), which was a huge limitation of sandbox solutions. Just think of what doors this opens up: all the HR site collections out in a farm can now talk amongst themselves if they are listed in the same tenant. Another difference is the ability to have permissions. This means the business power user who downloads and activates an app from the App Store can specify the exact permissions that app can have; no longer do you have to blindly give all third-party web parts complete control in your site collections. Here is an excerpt from MSDN about app permissions (http://msdn.microsoft.com/en-us/library/office/apps/fp179922(v=office.15)): "Apps for SharePoint have permissions just as users and groups do. This enables an app to have a set of permissions that are different from the permissions of the user who is executing the app." Unlike sandbox solutions, apps can now define their entire user interface. This means you are no longer limited to developing a web part and hosting it inside a web part page just to run some custom code. Think of how many times people make the comment "I want it to not look like SharePoint." Now you don't need to put your app in SharePoint and then brand SharePoint; you can use an app hosted externally and still get all your SharePoint lists, libraries, and context by default. 
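The permission model described above is declared in the app's AppManifest.xml. A sketch of what such a request can look like; the scope URI and right shown are one common combination, and a real app would request only what it actually needs:

```xml
<!-- Sketch of an AppManifest.xml permission request: the app asks for
     read access to lists in the host web. The user installing the app
     sees exactly this scope before granting trust. -->
<AppPermissionRequests>
  <AppPermissionRequest
      Scope="http://sharepoint/content/sitecollection/web/list"
      Right="Read" />
</AppPermissionRequests>
```

The key point is that the grant is explicit and scoped, unlike the all-or-nothing trust given to farm-deployed web parts.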
Here are some guidelines on the UX options you have with SharePoint Apps and on styling your app: http://msdn.microsoft.com/en-us/library/jj220046(v=office.15) There is so much more to come as I continue to fumble around in the 2013 beta preview, so stay tuned.
Netwoven   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:09pm</span>
By Kevin Fall, Deputy Director, Research, and CTO, SEI. This is the second of two blog posts highlighting recommended practices for developing safety-critical systems that were originally published on the Cyber Security & Information Systems Information Analysis Center (CSIAC) website. The first post in the series by Peter Feiler, Julien Delange, and Charles Weinstock explored challenges to developing safety-critical systems and presented the first three practices: Use quality attribute scenarios and mission-thread analyses to identify safety-critical requirements. Specify safety-critical requirements, and prioritize them. Conduct hazard and static analyses to guide architectural and design decisions. This post presents the remaining five technical best practices. Safety-Critical (SC) Systems—SPRUCE/SEI: https://www.csiac.org/spruce/resources/ref_documents/safety-critical-sc-systems-spruce-sei 4. Specify the architecture incrementally, in a formal notation. As with requirements, architectures are often specified incrementally, as new insights and risks emerge. These architectures are then communicated to developers and suppliers to align them with the selected design and implementation paths. Components with SC requirements should ideally be specified in a formal language with well-defined semantics to support rigorous model checking and theorem proving. Such notations enable evaluating the specification and predicting the component's behavior once it is implemented. When the risk of making the wrong architecture decision is high, it may be necessary to consider multiple architectures and co-develop one or more of them with suppliers (when there are suppliers). 
Appropriate stakeholders should evaluate the results to select an architecture, or multiple architectures, to pursue. We recommend, to the extent possible, using the same specification language (Practice 6) throughout system development for both system requirements and architecture. This commonality will enable architects and developers to (1) detect defects early (before implementation and testing) through model consistency checking and predictive analyses of operational quality attributes across requirements and solution specifications, (2) allow for analysis by formal methods such as model checkers and theorem provers, and (3) minimize incompatible abstractions, multiple truths, and indeterminate change impact. In our presentation of these practices, we've separated the practices for specifying requirements from those for specifying architecture, but these are not serial activities in which development teams do all of one and then all of the other. Rather, requirements and design interweave and influence each other. A bit more design often yields new derived requirements, which in turn might be addressed by additional design. For example, an infusion pump needs a sensor to ensure that no air bubbles greater than a certain diameter enter the system. The presence of the sensor creates an additional point of failure, which needs to be addressed through further design. So a second sensor or other forms of redundancy are added, each of which has its own derived requirements, and the process continues. 5. Sustain a virtual integration of the software through multiple levels of its specification. Applying virtual integration helps uncover issues with proposed technical solutions (candidate architectures and their implementations) before an expensive commitment to those solutions is required. Virtual integration is characterized by an "integrate then build" mindset, as opposed to the more common "build then integrate" mindset. 
By pursuing a virtual integration, architects can analyze and identify potential system issues so that engineers can correct their design immediately. This approach reduces development cost and avoids late delivery. In terms of current best practice, however, virtual integration is a concept not yet fully realized, except in separate, well-studied domains. It is possible to derive various domain-specific models from the specifications and subject them to domain-specific analyses, including evaluation by domain experts. But the current underlying meta-models are not yet fully semantically relatable; that is, they do not translate without loss or inconsistency in underlying semantics. These inconsistencies introduce subtly different meanings for the requirements or design. Practice 4 referred to these differences as "incompatible abstractions, multiple truths, and indeterminate change." Such different meanings could invalidate transferring conclusions drawn from the static analyses to the system being developed. Nevertheless, these automated, domain-specific static analyses still offer value. They can help detect many defects early, though with some false positives and false negatives. Such automated analyses become particularly important as the complexity of the software system increases beyond the ability of a single human to comprehend it. We therefore advocate taking a virtual-integration approach when developing SC systems. "Full" virtual integration is the goal of the System Architecture Virtual Integration (SAVI) Program, which is intended to advance the state of the art in virtual integration of complex systems. According to the SAVI website, SAVI aims to produce a new way of doing system integration: a virtual integration process (VIP). Models are the basis for the VIP, and the primary goal is to reduce the number of defects found during physical and logical system integration, [resulting in] lower cost and less time. Integration starts with conceptualization. 
"Integrate then build": Move integration forward, get it right sooner, and then keep it right as changes inevitably occur. The SAVI Program is maturing the best of the state of the art into best practice through a number of pilot projects and transition activities. It is expected to reach best-practice maturity in 2016 or 2017, according to the website. At present, we advocate pursuing virtual integration to the maximum practical extent, covering those domains posing the highest risk to the success of a project with the technology and skills available to the project. 6. Use the Architecture Analysis and Design Language (AADL) to formally specify requirements and architecture. The specification will need to cover interactions with the operational environment, the hardware on which the software will operate, the architecture for the software, and initial implementations of some components from a component or reuse library. The specification should also include agreements and derived requirements for components to be provided by suppliers. While other architecture definition languages have various strengths, we recommend AADL for these reasons: It has a formal definition with well-defined semantics for both software and hardware concerns. It supports specification and analysis of several quality attributes, including performance and safety. It has been proven through almost a decade of use since Version 1 in 2004. It is extensible through the addition of other domains and associated static analyses. It has support from a broad community, including tools such as OSATE. The use of AADL for discovering development issues (such as safety, performance, and integration) has been demonstrated in several research projects, such as SAVI. SAVI uses AADL for specifying the architecture and the main components, and AADL is the main backbone language used by SAVI. 
SAVI addresses ongoing safety-critical software development challenges by virtually integrating software and hardware components to discover issues at the earliest possible opportunity. 7. Monitor implementation, integration, and testing. If we're lucky enough to work in a well-integrated set of mature domains, we may be able to generate all of the code from our detailed architectural specifications, perhaps through parameterization of prebuilt architectural patterns and associated code. Otherwise, and more likely, we will have to write some of the code ourselves. While the previous practices (especially Practices 3-5) have helped establish an architecture that can meet timing and other nonfunctional, safety-related requirements, implementation must proceed carefully to ensure that the result conforms to the architecture and that integration proceeds smoothly. There may still be surprises. As mentioned, much of the implementation may be generated automatically from the AADL specification, particularly when predefined architectural templates developed for this purpose are used. When reusing code developed for another purpose, be alert to the possibility that assumptions made during its initial implementation—not all of them documented—may not hold in the new operational context. It is also necessary to test the fail-safe parts of SC systems carefully. Perhaps because of the general optimism with which humans approach projects and tasks, the tendency is to cover only scenarios based on anticipated normal use; you and your users then risk discovering that the system doesn't behave as intended during failure or restoration of service. "Cause of Telephone System Breakdowns Eludes Investigators" and "The AT&T Crash" provide examples of systems that didn't behave as intended during failure.
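The pattern for testing fail-safe behavior is to inject the failure deliberately and assert that the safe state is reached. Here is a minimal sketch; the pump controller, its sensor interface, and the fail-safe rule are hypothetical examples, not drawn from a real device:

```python
# Illustrative fault-injection test: all names here are hypothetical.
import unittest

class PumpController:
    """Toy controller: enters a safe state when its sensor fails."""
    def __init__(self, sensor):
        self.sensor = sensor
        self.infusing = False

    def tick(self):
        try:
            rate = self.sensor.read()
        except IOError:
            # Fail-safe rule: on sensor failure, stop infusion.
            self.infusing = False
            return
        self.infusing = rate > 0

class FailingSensor:
    """Stand-in sensor that simulates a hardware fault on every read."""
    def read(self):
        raise IOError("sensor fault")

class FailureScenarioTest(unittest.TestCase):
    def test_sensor_failure_stops_infusion(self):
        controller = PumpController(FailingSensor())
        controller.infusing = True            # system was running normally
        controller.tick()                     # inject the failure
        self.assertFalse(controller.infusing)  # fail-safe state reached

if __name__ == "__main__":
    unittest.main()
```

The same pattern extends to restoration-of-service scenarios: restore the sensor and assert that the system resumes, or requires operator confirmation, according to its specified behavior.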
An example of reusing code developed for one context and placing it in another is the Ariane 5 catastrophe, which fortunately had satellites, not humans, as its payload. For high-risk SC requirements and components, you might build high-fidelity models of the component, annotated with formal assertions written in a formal specification language combined with AADL (Practice 6), to specify the component's required behavior. You can then employ theorem proving or model checking to verify that the component's code does in fact satisfy its specification. When architecting a system that has suppliers or draws upon code from external sources, additional care is needed to negotiate with suppliers and evaluate sources based on an understanding of product and process capability within the relevant product domains, relative to the project's SC needs. Many of these activities require diligent communication with stakeholders, both in the operational environment and in the supply chain, to set expectations and to ensure understanding of the relevant SC requirements and of the architectural, implementation, and verification approaches that will address them. 8. Prepare a safety case for certification concurrently with developing the system. The question the manager responsible for developing SC software will ask is "How can I be sure that everything reasonable is being done to ensure that the developed system will behave safely in operation?" External stakeholders—in particular, regulatory agencies—will need to be sure of this too. Products that have the potential to be unsafe must go through certification or some other regulatory process before being sold. Such requirements vary according to the product being built. Flight-control software in the USA is subject to Federal Aviation Administration (FAA) regulations or their non-U.S. equivalents. For an infusion pump in the USA, the Food and Drug Administration (FDA) establishes the requirements.
Likewise, for software to shut down a nuclear reactor, it is the Nuclear Regulatory Commission in the USA. In all three cases, software suppliers must submit documentation of what they've done to address safety as part of the request for certification. Apart from what it will take to achieve certification, your organization will not want to confront the liability arising from a catastrophe due to some oversight in how the system was designed, implemented, verified, and validated (and, if applicable, manufactured). Typically, some sub-organization, perhaps Quality Assurance (QA), will take on the role of assuring management that due diligence was (and is being) exercised in product development; but when it comes to systemic, critical quality attributes such as safety, due diligence is the responsibility of everyone involved in developing the product. A compelling case must therefore be prepared for both internal and external stakeholders to show that the project has done all that is reasonable to ensure that the developed system will be safe in routine operation, when under stress, and if components fail. This case will reflect many of the early and ongoing considerations of a project seeking to mitigate risks to human safety. Toward this end, projects should develop an assurance case for safety, also called a "safety case." A safety case is an argument, supported by evidence, that the system meets its claim of safety. It provides justified confidence that the system will function as intended (with respect to the safety claim) in its environment of use. The evidence in a safety case can include process arguments, formal analysis, simulation, testing, and hazard analysis—in effect, all of the techniques previously discussed. The case becomes a reviewable artifact that makes development, maintenance, and evaluation significantly more effective.
Special attention should be given to the fact that as SC systems become more software-reliant, we rely less on the failure probabilities of physical components. Software defects are design faults that occur with probability 1 whenever the conditions that trigger them arise. We should therefore consider analytic-redundancy approaches to mitigate such failures [reference to Lui Sha's work on Simplex/Analytic redundancy]. A compelling and thorough safety case must be planned and prepared for at the outset of a project. Indeed, a safety case can be considered an essential "component" of the product that the project will produce. As such, a safety case has requirements, a design, and content that serves various functions and must be structured, as well as parts that must be related to each other in various ways and traced to the safety and regulatory requirements themselves. Because a safety case is so tightly wedded to early project design and planning considerations anyway—when the project's needed processes, methods, tools, and skill sets will be determined—it is both prudent and efficient to begin developing the safety case early, alongside the system being developed, and to use it to guide system development. At the beginning of system development, a safety case will be more abstract, addressing components from the top level of the product hierarchy. For example, "the infusion-pump keyboard will be resistant to errors because its keys have no bounce characteristics and because its human interface has been designed properly." As development proceeds, individual components of this argument will be extended with or supplemented by increasingly detailed arguments supported by evidence.
For instance, "the keyboard has no bounce characteristics because the keyboard state is polled with high frequency to disambiguate key presses; and here are results from tests of the bounce characteristics of the selected keyboard."Initially, there will also be a focus on the processes, methods, and tools to use, but these too will get more specific (e.g., in selecting testing methods for evaluating keyboard bounce) as the system and software architecture are refined. As the project progresses, both processes and product portions of the argument will become more granular and more complete and at some point will be represented by specific results from process, method, and tool application. For example, the project manager might initially know that the project is heading toward demonstrating that the keyboard has no bounce but might not know the specific way that will be demonstrated until the keyboard is selected.As a safety case emerges, it provides context for interpreting a particular piece of evidence. For example, when provided with test results of the bounce characteristics of a particular keyboard, the manager or other stakeholders can relate that specific evidence to the broader safety case and evaluate the strengths and weaknesses of the overall argument that "this is safe because... ." As product design proceeds, you will make more decisions (e.g., to add or replicate sensors), have more claims to check, and accumulate more evidence, producing a tree of claims, which is the safety case.Thus, by concurrently developing the safety case as the system is architected and implemented, you help ensure continual attention on high-priority technical requirements and risks. You also produce an organized tree of claims linking SC requirements and architectural decisions to claims and evidence of those claims. Here, "evidence" means the results of static analyses from Practice 3, testing from Practice 7, formal methods, and process-capability arguments. 
For instance, "a log of many years usage has shown that this subcomponent executing in similar circumstances has not experienced a problem." Traditionally, when the device description, hazard analysis, results of testing, and other documents are filed with the appropriate regulatory agency (the FDA, in the case of the infusion pumps), there is a statutorily limited time period during which the agency reviews the documentation and makes a decision. This can be a daunting, even impossible challenge for reviewers. Typically, the reviewer probes specific claims in the documentation to assess how it supports safety rather than trying to assess every claim and all evidence.Specifically in the case of infusion pumps, the FDA has introduced a requirement that vendors include a safety case as part of their submission with the intent of eventually extending it to cover all software-reliant medical devices. The benefit for reviewers—not just at the FDA but also stakeholders inside the company and in the supply chain—is that a safety case provides a description of the product, claims about it, and a body of evidence, as well as the argument linking these together. Thus, if a reviewer has a question about particular hazards, design decisions, claims about them, or evidence, he or she can more easily relate each of these to the others and the rest of the safety case and arrive more readily at an evaluation of individual claims and the quality of the overall argument. This enables the reviewer to make more strategic use of limited review time and more rapidly identify inadequately mitigated risks for appropriate follow up."Towards an Assurance Case Practice for Medical Devices" provides a partial example of a safety case for infusion pumps. In the case of aviation, the UK MoD Defence Standard 00-56 has been requiring that a safety case be part of a vendor’s submission for a range of defense aircraft types since 2007. QinetiQ has developed an example military safety case. 
Looking Ahead Technology transition is a key part of the SEI's mission and a guiding principle in our role as a federally funded research and development center. The next post in this series will present recommended practices for enabling agility at scale. We welcome your comments and suggestions on this series in the comments section below. Additional Resources To view the complete post on the CSIAC website, which includes a detailed list of resources for developing safety-critical systems, please visit https://www.csiac.org/spruce/resources/ref_documents/safety-critical-sc-systems-spruce-sei.
SEI   .   Blog   .   Jul 27, 2015 01:09pm
On June 3, @shrmnextchat chatted with the #SHRM15 bloggers in an hour filled with the best tips and advice for attending the 2015 SHRM Annual Conference & Exposition.   In case you missed this amazing chat filled with great information, you can see all the tweets here:   [View the story "#Nextchat RECAP: #SHRM15 Bloggerpalooza " on Storify]  ...
SHRM   .   Blog   .   Jul 27, 2015 01:08pm
By Hasan Yasar, Technical Manager, Cyber Engineering Solutions Group In late 2014, the SEI blog introduced a biweekly series of blog posts offering guidelines, practical advice, and tutorials for organizations seeking to adopt DevOps. These posts are aimed at the ever-increasing number of organizations adopting DevOps (up 26 percent since 2011). According to recent research, those organizations ship code 30 times faster. Despite the obvious benefits of DevOps, many organizations hesitate to embrace it, because it requires a shift in mindset and imposes cultural and technical requirements that prove challenging in siloed organizations. Given these barriers, posts by CERT researchers have focused on case studies of successful DevOps implementations at Amazon and Netflix, as well as on tutorials for popular DevOps technologies such as Fabric, Ansible, and Docker. This post presents the 10 most popular DevOps posts (based on number of visits) over the last six months. 1. DevOps Technologies: Fabric or Ansible In the blog post DevOps Technologies: Fabric or Ansible, CERT researcher Tim Palko highlights use cases associated with the DevOps deployment process, including evaluating resource requirements, designing a production system, provisioning and configuring production servers, and pushing code, to name a few. Here is an excerpt: The workflow of deploying code is almost as old as code itself. There are many use cases associated with the deployment process, including evaluating resource requirements, designing a production system, provisioning and configuring production servers, and pushing code, to name a few. In this blog post I focus on a use case for configuring a remote server with the packages and software necessary to execute your code. This use case is supported by many different and competing technologies, such as Chef, Puppet, Fabric, Ansible, Salt, and Foreman, just a few of the technologies you are likely to have heard of on the path to automation in DevOps.
All these technologies have free offerings, leave you with scripts to commit to your repository, and get the job done. This post explores Fabric and Ansible in more depth. To learn more about other infrastructure-as-code solutions, check out Joe Yankel's blog post on Docker or my post on Vagrant. One difference between Fabric and Ansible is that while Fabric will get you results in minutes, Ansible requires a bit more effort to understand. Ansible is generally much more powerful, since it provides much deeper and more complex semantics for modeling multi-tier infrastructure, such as environments with arrays of web and database hosts. From an operator's perspective, Fabric has a more literal and basic API and uses Python for authoring, while Ansible consumes YAML and provides a richness in its behavior (which I discuss later in this post). We'll walk through examples of both in this post. To read the complete post DevOps Technologies: Fabric or Ansible, please visit http://blog.sei.cmu.edu/post.cfm/devops-technologies-fabric-or-ansible. 2. DevOps and Docker and 3. Development with Docker Docker is quite the buzz in the DevOps community these days, and for good reason. Docker containers provide the tools to develop and deploy software applications in a controlled, isolated, flexible, and highly portable infrastructure. In the post DevOps and Docker, CERT researcher Joe Yankel introduces Docker as a tool to develop and deploy software applications with substantial benefits to scalability, resource efficiency, and resiliency. Here is an excerpt: Linux container technology (LXC), which provides the foundation that Docker is built upon, is not a new idea. LXC has been in the Linux kernel since version 2.6.24, when control groups (or cgroups) were officially integrated. Cgroups were actually being used by Google as early as 2006, since Google has always been looking for ways to isolate resources running on shared hardware.
In fact, Google acknowledges firing up over 2 billion containers a week and has released its own version of LXC containers, called lmctfy, or "Let Me Contain That For You." Unfortunately, none of this technology was easy to adopt until Docker came along and simplified container technology, making it much easier to use. Before Docker, developers had a hard time accessing, implementing, or even understanding LXC, let alone its advantages over hypervisors. DotCloud founder and current Docker chief technology officer Solomon Hykes was on to something really big when he began the Docker project and released it to the world as open source in March 2013. Docker's ease of use is due to its high-level API and documentation, which enabled the DevOps community to dive in full force and create tutorials, official containerized applications, and many additional technologies. By lowering the barrier to entry for container technology, Docker has changed the way developers share, test, and deploy applications. In the post Development with Docker, Yankel offers a tutorial on how to get started developing software with Docker in a common software development environment by launching a database container (MongoDB) and a web service container (a Python Bottle app) and configuring them to communicate, forming a functional multi-container application. Here is an excerpt: If you haven't learned the basics of Docker yet, you should go ahead and try out their official tutorial here before continuing. To get started, you need to have a virtual machine or other host that is compatible with Docker. Follow the instructions below to create the source files necessary for the demo. For convenience, you can download all source files from our GitHub repository and skip to the demo section. Our source contains a Vagrant configuration file that allows you to run the demo in an environment that will work. See our introductory post about Vagrant here.
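A detail worth noting in such multi-container setups is how the web container finds the database container: typically through environment variables injected by Docker links or Compose, rather than hard-coded addresses. Here is a stdlib-only sketch of that pattern (the variable names and the "db" alias are hypothetical; the tutorial itself uses MongoDB and Bottle):

```python
# Illustrative: configure a service's database connection the way a
# linked Docker container would, via environment variables.
import os

def mongo_url():
    # In a Docker link or Compose setup, the host and port would be
    # injected into the container's environment; the defaults here
    # are for running the service directly on a development machine.
    host = os.environ.get("MONGO_HOST", "localhost")
    port = int(os.environ.get("MONGO_PORT", "27017"))
    return f"mongodb://{host}:{port}/demo"

os.environ["MONGO_HOST"] = "db"   # e.g., the linked container's alias
print(mongo_url())                # mongodb://db:27017/demo
```

Because the address comes from the environment, the same application code runs unchanged on a laptop, in a container, or in production; only the environment differs.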
To read the complete post DevOps and Docker, please visit http://blog.sei.cmu.edu/post.cfm/devops-docker-015. To read the complete post Development with Docker, please visit http://blog.sei.cmu.edu/post.cfm/development-with-docker. 4. DevOps Case Study: Amazon AWS Regular readers of the DevOps blog will recognize a recurring theme in this series: DevOps is fundamentally about reinforcing desired quality attributes through carefully constructed organizational process, communication, and workflow. By studying well-known tech companies and their techniques for managing software engineering and sustainment, our series of posts can draw valuable real-world examples of software engineering approaches and their associated outcomes, which also serve as excellent case studies for DevOps practitioners. In the post DevOps Case Study: Amazon AWS, C. Aaron Cois explores Amazon's experience with DevOps. Here is an excerpt: Amazon is one of the most prolific tech companies today. Amazon transformed itself in 2006 from an online retailer to a tech giant and pioneer in the cloud space with the release of Amazon Web Services (AWS), a widely used on-demand Infrastructure as a Service (IaaS) offering. Amazon accepted a lot of risk with AWS. By developing one of the first massive public cloud services, they accepted that many of the challenges would be unknown and many of the solutions unproven. To learn from Amazon's success, we need to ask the right questions. What steps did Amazon take to minimize this inherently risky venture? How did Amazon engineers define their process to ensure quality? Luckily, some insight into these questions became available when Google engineer Steve Yegge (a former Amazon engineer) accidentally made public an internal memo outlining his impression of Google's failings (and Amazon's successes) at platform engineering.
This memo (which Yegge has specifically allowed to remain online) outlines a specific decision that illustrates CEO Jeff Bezos's understanding of the underlying tenets of what we now call DevOps, as well as his dedication to what I will claim are the primary quality attributes of the AWS platform: interoperability, availability, reliability, and security. To read the complete post DevOps Case Study: Amazon AWS, please visit http://blog.sei.cmu.edu/post.cfm/devops-casestudy-amazon-aws-036. 5. DevOps Case Study: Netflix and the Chaos Monkey While DevOps is often approached through practices such as Agile development, automation, and continuous delivery, the spirit of DevOps can be applied in many ways. In this blog post, C. Aaron Cois examines another seminal case study of DevOps thinking applied in a somewhat out-of-the-box way by Netflix. Here is an excerpt: Netflix is a fantastic case study for DevOps because its software-engineering process shows a fundamental understanding of DevOps thinking and a focus on quality attributes through automation-assisted process. Recall that DevOps practitioners espouse a driven focus on quality attributes to meet business needs, leveraging automated processes to achieve consistency and efficiency. Netflix's streaming service is a large distributed system hosted on Amazon Web Services (AWS). Because so many components have to work together to provide reliable video streams to customers across a wide range of devices, Netflix engineers needed to focus heavily on the quality attributes of reliability and robustness for both server- and client-side components. In short, they concluded that the only way to be comfortable handling failure is to constantly practice failing. To achieve the desired level of confidence and quality, in true DevOps style, Netflix engineers set about automating failure.
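The core of "automating failure" is small enough to sketch. The following toy stand-in for Chaos Monkey (the fleet names and the termination hook are hypothetical, not Netflix's implementation) randomly selects a victim so the team can verify that the service degrades gracefully:

```python
# Toy chaos-monkey sketch: randomly terminate one instance from a fleet.
# Instance names and the terminate() hook are hypothetical.
import random

def unleash_chaos(instances, terminate, rng=random):
    """Pick a random instance, terminate it, and return its id so the
    team can correlate the induced failure with observed behavior."""
    victim = rng.choice(instances)
    terminate(victim)
    return victim

fleet = ["web-1", "web-2", "api-1", "cache-1"]
killed = []   # stand-in for a real termination API; records the victims
victim = unleash_chaos(fleet, killed.append, rng=random.Random(0))
print(f"terminated {victim}; now verify the service still meets its SLA")
```

The real Chaos Monkey terminates production AWS instances during business hours, so engineers are on hand to observe weaknesses and fix them; the discipline lies in running it constantly, not in the selection logic.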
To read the complete post DevOps Case Study: Netflix and the Chaos Monkey, please visit http://blog.sei.cmu.edu/post.cfm/devops-case-study-netflix-and-the-chaos-monkey. 6. DevOps and Agile Melvin Conway, an eminent computer scientist and programmer, coined Conway's Law, which states: Organizations that design systems are constrained to produce designs which are copies of the communication structures of these organizations. Thus, a company with front-end, back-end, and database teams might lean heavily toward three-tier architectures. The structure of the application developed will be determined, in large part, by the communication structure of the organization developing it. In short, form is a product of communication. In the post DevOps and Agile, C. Aaron Cois looks at the fundamental concept of Conway's Law applied to the organization itself. Here is an excerpt: The traditional-but-insufficient waterfall development process has defined a specific communication structure for our application: developers hand off to the quality assurance (QA) team for testing, and QA hands off to the operations (Ops) team for deployment. The communication defined by this non-Agile process reinforces our flawed organizational structures, uncovering another example of Conway's Law: organizational structure is a product of process. To read the complete post DevOps and Agile, please visit https://blog.sei.cmu.edu/post.cfm/devops-agile-317. 7. ChatOps in the DevOps Team Conversations between the key stakeholders of a project team (e.g., developers, business analysts, project managers, and the security team) and the platform on which that communication occurs can have a profound impact on collaboration. Poor or unused communication tools lead to miscommunication, redundant effort, or faulty implementations. On the other hand, communication tools integrated with the development and operational infrastructures can speed up the delivery of business value to the organization.
How a team structures the infrastructure on which it communicates will directly affect its effectiveness as a team. In the post ChatOps in the DevOps Team, CERT researcher Todd Waits introduces ChatOps, a branch of DevOps that focuses on communications within the DevOps team. The ChatOps space encompasses the team's communication and collaboration tools: notifications, chat servers, bots, issue-tracking systems, etc. Here is an excerpt: In a recent blog post, Eric Sigler writes that ChatOps, a term that originated at GitHub, is all about conversation-driven development. "By bringing your tools into your conversations and using a chat bot modified to work with key plugins and scripts, teams can automate tasks and collaborate, working better, cheaper and faster," Sigler writes. Most teams have some level of collaboration on a chat server. The chat server can act as a town square for the broader development teams, facilitating cohesion and providing a space for team members to do everything from blowing off steam with gif parties to discussing potential solutions to real problems. We want all team members on the chat server. In our team, to filter out the noise of a general chat room, we also create dedicated rooms for each project, where the project team members can talk about project details that do not involve the broader team. More than a simple medium, the chat server can be made intelligent, passing notifications from the development infrastructure to the team and executing commands from the team back to the infrastructure. Our chat server is the hub for notifications and quick interactions with our development infrastructure. Project teams are notified through the chat server (among other methods) of any build status they care to follow: build failures, build successes, timeouts, etc. To read the complete post ChatOps in the DevOps Team, please visit http://blog.sei.cmu.edu/post.cfm/chatops-in-devops-team-029. 8.
DevOps Technologies: Vagrant Environment parity is the ideal state in which the various environments where code is executed behave equivalently. The lack of environment parity is one of the more frustrating and tenacious aspects of software development. Both deployment and development fall victim to this pitfall too often, reducing stability, predictability, and productivity. When parity is not achieved, environments behave differently, which makes troubleshooting hard and can make collaboration seem impossible. This lack of parity is a burden for too many developers and operational staff. In the blog post DevOps Technologies: Vagrant, CERT researcher Tim Palko describes Vagrant, a developer's tool that provides a virtualized and provisioned environment to developers, using operations tools, with a single declarative script and a simple command-line interface. Vagrant increases environment parity by using the same preconfigured (scripted) environment across all developers and in production, eliminating the "it works on my machine" excuse from the application development lifecycle. Here is an excerpt: The job of an operations team often involves implementing full parity across deployment environments, such as those used for testing, staging, and production. Conversely, the development team is almost entirely responsible for provisioning development machines. To achieve 100 percent parity between both sets of environments, both teams must speak the same language and use the same resources. Chef and Puppet, both crafted for the operations role, are just slightly out of reach for a busy developer. Each has a respectable learning curve, and neither really solves the parity problem completely: developers still need to virtualize the correct production target platform. All this additional work incurs a decent amount of overhead when you just want to write code! This is where Vagrant comes in.
Vagrant is a developer's tool that basically serves up a virtualized and provisioned environment to developers, using operations tools, with a single declarative script and a simple command-line interface. Vagrant cuts out the grunt work needed to stand up a virtual machine (VM), and it removes the need to configure or run, for example, chef-server and chef-client. Vagrant hides all of this and leaves the developer with a simple script, an extensionless file named Vagrantfile, which can be checked into source control along with the code. To read the complete post DevOps Technologies: Vagrant, please visit https://blog.sei.cmu.edu/post.cfm/devops-technologies-vagrant-345. 9. Addressing the Detrimental Effects of Context Switching with DevOps In a computing system, a context switch occurs when an operating system stores the state of an application thread before stopping the thread and restoring the state of a different (previously stopped) thread so its execution can resume. The overhead incurred by a context switch in managing the process of storing and restoring state negatively impacts operating-system and application performance. In the blog post Addressing the Detrimental Effects of Context Switching with DevOps, CERT researcher Todd Waits describes how DevOps ameliorates the negative impacts that "context switching" between projects can have on a software engineering team's performance. Here is an excerpt: In the book Quality Software Management: Systems Thinking, Gerald Weinberg discusses how the concept of context switching applies to an engineering team. From a human-workforce perspective, context switching is the process of stopping work on one project and picking it back up after performing a different task on a different project. Just like computing systems, human team members often incur overhead when context switching between multiple projects. Context switching most commonly occurs when team members are assigned to multiple projects.
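The arithmetic behind that overhead is worth making explicit. The sketch below assumes a fixed fraction of capacity lost per additional project; the 20 percent figure is a commonly cited rule of thumb associated with Weinberg's discussion and is used here purely for illustration:

```python
# Illustrative arithmetic: effort available per project when one person
# is split across n projects. The 20% loss per additional project is a
# rule of thumb often attributed to Weinberg, not a measurement.
def effort_per_project(n_projects, switch_loss=0.20):
    if n_projects < 1:
        raise ValueError("need at least one project")
    lost = switch_loss * (n_projects - 1)   # total capacity lost to switching
    productive = max(0.0, 1.0 - lost)       # capacity left for real work
    return productive / n_projects          # share each project receives

for n in range(1, 5):
    print(n, f"{effort_per_project(n):.0%} per project")
```

Under this assumption, a person on two projects gives each only 40 percent rather than the naive 50, and on four projects only 10 percent each, which is the precipitous productivity drop the post describes.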
The rationale behind the practice of context switching is that it is logistically simpler to allocate team members across projects than to try to have dedicated resources on each project. It seems reasonable to assume that splitting a person's effort between two projects yields 50 percent effort on each project. Moreover, if a team member is dedicated to a single project, that team member will be idle whenever the project is waiting for something to occur, such as completing paperwork, reviews, etc. Using our computing-system metaphor, this switching between tasks is similar to multi-threading: if one thread blocks for some reason, other threads can perform other work rather than waiting for the first thread to unblock. If all work were assigned only to the first thread, progress would be much slower. While multi-threading may be sound reasoning in computing systems, the problem is that human workers don't always get a nice 50-50 effort distribution. Effort is thus lost to context switching, and productivity may drop precipitously as the worker's effort is spread across more projects. To read the complete post Addressing the Detrimental Effects of Context Switching with DevOps, please visit http://blog.sei.cmu.edu/post.cfm/addressing-detrimental-effects-context-switching-devops-064. 10. What is DevOps? Typically, when we envision DevOps implemented in an organization, we imagine a well-oiled machine that automates infrastructure provisioning, code testing, and application deployment. Ultimately, these practices are a result of applying DevOps methods and tools. DevOps works for teams of all sizes, from a team of one to an enterprise organization. In the post What is DevOps?, CERT researcher Todd Waits presents the foundations of DevOps. DevOps can be seen as an extension of Agile methods: it requires all the knowledge and skills necessary to take a project from inception through sustainment to be contained within a dedicated project team.
Organizational silos must be broken down; only then can project risk be effectively mitigated. Here is an excerpt: While DevOps is not, strictly speaking, continuous integration, delivery, or deployment, DevOps practices do enable a team to achieve the level of coordination and understanding necessary to automate infrastructure, testing, and deployment. In particular, DevOps provides organizations a way to ensure

- collaboration between project team roles
- infrastructure as code
- automation of tasks, processes, and workflows
- monitoring of applications and infrastructure

Business value drives DevOps development. Without a DevOps mindset, organizations often find their operations, development, and testing teams working toward the short-sighted incentives of creating their own infrastructure, test suites, or product increment. Once an organization breaks down the silos and integrates these areas of expertise, it can focus on working together toward the common, fundamental goal of delivering business value. Well-organized teams will find (or create) tools and techniques to enable DevOps practices in their organizations. Every organization is different and has different needs that must be met. The crux of DevOps, though, is not a killer tool or script, but a culture of collaboration and an ultimate commitment to deliver value. To read the complete post What is DevOps, please visit https://blog.sei.cmu.edu/post.cfm/what-is-devops-324. Every two weeks, the SEI will publish a new blog post that offers guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below. Additional Resources To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits, please click here. To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here.
To listen to the podcast DevOps—Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here. To read all of the blog posts in our DevOps series, please click here.
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:08pm</span>
Another very cool new feature of SharePoint 2013 is its Request Management capability. It pairs an old 2010 feature called "Resource Throttling" with a new feature called Routing Rules. These routing rules allow you to redirect traffic coming to your farm based on the following properties:

- Url
- UrlReferrer
- UserAgent
- Host
- IP
- HttpMethod
- SoapAction
- CustomHeader

The operators that you can use in these rules are StartsWith, EndsWith, Equals, and even RegEx. This is very similar to functionality you might already have in your application load balancers (F5, Cisco). So what does this mean for my SharePoint farms? Request Manager allows you to direct traffic to web front ends (WFEs) that are tailored to the type of request. For example, one of my clients makes heavy use of the SharePoint Client Object Model, so there are tons of web service calls. For that scenario we could set up a SharePoint WFE that has IIS settings tuned for web services. Another example I have faced in the wild is a department that wants faster responses or dedicated resources. In the scenario where an entire company shares one SharePoint farm, departments within the company may want the flexibility to pay for faster responses or dedicated resources. With Request Manager you can now let Marketing buy a server for your farm, add it, and then use Request Manager to direct all Marketing site collections (based on URLs) to the pool of WFEs that Marketing bought. Another example would be housing mission-critical apps on the same farm as non-critical apps. You can direct all traffic of the mission-critical applications to your faster, more powerful front ends, and your non-critical apps to the older hardware. I think it will be interesting to see some performance metrics (once Microsoft provides them) on how much of a hit your farm takes implementing these rules.
My guess is that this is great for small IT shops that have 2 or 3 servers but don’t want to invest in a network appliance that load balances and directs traffic. Big companies that already have network appliances in place will continue using their proven techniques. Also keep in mind that in a virtualized SharePoint environment it is a best practice to offload software load balancing (MS NLB) to a network appliance for performance reasons, so I would assume the same holds true for Request Manager’s routing rules. The last thing I want to mention is a gotcha with Request Manager: if you implement routing rules and a request does not match any of them, the request is discarded. Yup, it just throws it away. To make sure this doesn’t happen, we suggest implementing a catch-all rule: put it at the bottom in Execution Group 2 and have it match all requests. Read more in the TechNet training for IT pros: http://technet.microsoft.com/en-us/sharepoint/fp123606.aspx
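To make the rule mechanics concrete, here is a conceptual sketch of how such routing rules evaluate a request. This is illustrative only: real Request Manager rules are configured in SharePoint (typically via PowerShell), and the rule values and pool names below are hypothetical:

```python
import re

# Conceptual sketch of Request Manager-style routing rules. Each rule
# tests one request property with one of the supported operators and
# routes matching requests to a pool of web front ends (WFEs).

RULES = [
    # (property, operator, value, target WFE pool) -- hypothetical examples
    ("Url",       "StartsWith", "/sites/marketing", "marketing-wfes"),
    ("UserAgent", "RegEx",      r".*Client OM.*",   "webservice-wfes"),
    # Catch-all rule: without it, unmatched requests would be discarded.
    ("Url",       "RegEx",      r".*",              "default-wfes"),
]

def match(op, prop_value, rule_value):
    if op == "StartsWith":
        return prop_value.startswith(rule_value)
    if op == "EndsWith":
        return prop_value.endswith(rule_value)
    if op == "Equals":
        return prop_value == rule_value
    if op == "RegEx":
        return re.match(rule_value, prop_value) is not None
    raise ValueError(f"unknown operator: {op}")

def route(request):
    """Return the WFE pool for the first matching rule, else None (discarded)."""
    for prop, op, value, pool in RULES:
        if match(op, request.get(prop, ""), value):
            return pool
    return None

print(route({"Url": "/sites/marketing/home.aspx"}))  # marketing-wfes
```

Note how the final rule plays the role of the catch-all described above: remove it, and any request that misses the first two rules returns None, the analog of Request Manager silently dropping the request.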
Netwoven   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:08pm</span>
A towering yellow excavator dug into packed dirt and placed it into the back of a dump truck. As the truck backed up, it let out a loud "BEEP... BEEP... BEEP..." and forced an awkward explanation to my co-workers on the other end of the conference call. I'd spent a good part of my afternoon that day watching the construction site outside my office, all the while reminiscing about when I was a kid playing with my trucks and action figures in the dirt. Kids do it all the time, so why can't we? Adults who play are shown to...
SHRM   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:08pm</span>
By Jeff Boleng
Principal Researcher
Advanced Mobile Systems Initiative

In their current state, wearable computing devices, such as glasses, watches, or sensors embedded into your clothing, are obtrusive. Jason Hong, associate professor of computer science at Carnegie Mellon University, wrote in a 2014 co-authored article in Pervasive Computing that while wearables gather input from sensors placed optimally on our bodies, they can also be "harder to accommodate due to our social context and requirements to keep them small and lightweight." For soldiers in battle or emergency workers responding to contingencies, seamless interaction with wearable devices is critical. No matter how much hardware soldiers wear or carry, it will be of no benefit if they have to stop what they are doing to interact with it while responding to enemy fire or another emergency situation. This blog post describes our joint research with CMU’s Human Computer Interaction Institute (HCII) to understand the mission, role, and task of individual dismounted soldiers, using context derived from sensors on their mobile devices and bodies to ensure they have the needed information and support. A Model for Context-Aware Computing In beginning this research, we partnered with Dr. Anind Dey, an early pioneer in context-aware computing and director of HCII. Dey wrote one of the seminal papers on contextual computing, "Understanding and Using Context," while completing his doctoral work at the Georgia Institute of Technology. At HCII, Dey researches sensors and mobile technology to develop tools and techniques for understanding and modeling human behavior, primarily within the domains of health, automobiles, sustainability, and education. Our collaboration with Dey and other HCII researchers aims to use burgeoning computing capabilities that are either worn on users’ bodies or tied to users’ smartphones through cloud infrastructure.
We want to ensure this technology works with the user in an unobtrusive way and anticipates the user’s informational and other needs. Helping Soldiers on the Battlefield Through our joint research effort, we developed a framework and a data model that involved codifying a soldier’s role and task within the context of a larger group mission. We then examined data streams available from sensors on smart phones that soldiers use in the field. We augmented our experimentation with other body-worn sensors. Take the example of an explosives disposal technician sent to investigate an unexploded ordnance. As the technician approaches it, his or her smartphone or wearable device, sensing the location, would automatically disable all wireless communications used by the technician, and any of those in use by nearby soldiers, that could trigger the ordnance. Other nearby wearable devices or smartphones that remain a safe distance away may then send notifications to other soldiers in the unit to retreat to a standoff distance. Another scenario might involve a wearable camera. As the technician approaches an explosive device, a wearable camera, such as Google Glass, could conduct object detection and recognition. The camera would then, ideally, provide information, such as type of device, amount of yield, type of fuse, or whether the device is similar to one that had been previously defused. The camera may even provide common disabling techniques without the soldier having to scroll through options or issue commands. Knowing a soldier’s mission, role, and task, our framework incorporates all the sensory data—including audio, video, and motion sensors—and then delivers information that, ideally, is the most appropriate for a soldier in a given scenario. We then extended the framework for a group context because soldiers and first responders almost always work in teams. 
A group of soldiers or emergency responders has a mission, and, based on that mission, they all have roles they have to fulfill (for instance, radio, gunner, or commander). Based on those roles and that mission, they all have a number of tasks they must complete. We developed a framework and mobile device prototype that would share context among all the handheld devices used by a team working on a mission to help the team coordinate tasks most effectively. Testing Our Framework in the (Paintball) Field Each of the smartphones or wearable devices that we experimented with had, on average, between eight and 12 raw sensor streams tracking

- temperature
- barometric pressure
- location data from GPS
- received signal strength from various local wireless networks
- inertial measurement unit (IMU) data, measured by six-axis devices that not only detail whether a user is moving up, down, left, or right, but also provide acceleration rates for each plane

Next, we designed several scenarios that were representative of small-unit tactics, everything from an ambush to a patrol to a coordinated retreat. We decided to test our scenarios in paintball sessions because the feedback mechanism (the sting of being struck by a paintball) provided enough incentive for the volunteers (who were drawn from the 911th Airlift Wing) to react realistically. At an outdoor paintball course, we attached sensors to the bodies of our volunteers. We then filmed the volunteers in scripted scenarios and recorded the sensor data streams. Relying on activity recognition research, we then took the data from a dozen high-bandwidth (high-sampling-rate) streams for each volunteer and determined, based on those sensor streams, the activity of the volunteer and also the larger context of what the group was performing. This work is another area in which we are collaborating with Dey.
We tested two approaches:

1. Taking the individual sensor streams and recognizing each individual activity (leveraging machine learning), then looking at the activities performed by each volunteer to try to infer a group activity.
2. Taking all the raw sensor feeds from all of the volunteers directly into the machine-learning algorithms. This approach allowed us to jump from raw data to the understanding that, for example, a group of soldiers is under attack or retreating, without first inferring the individual context.

Currently, we are examining the raw sensor data captured during the paintball exercises and labeling it. By so doing, we are trying to determine, for example, whether an individual was running, retreating, or part of an entire group that was being ambushed and returning fire. With the video capture providing a ground truth of their activity, we run the raw sensor data in every combination we can think of through the machine-learning algorithms and try to determine which sensor streams (and combinations of sensor streams) are most accurately able to predict an individual’s activity. For example, in some instances we needed to combine the inertial measurement unit data from the leg and the head to accurately predict the individual’s position. The benefit of using machine-learning algorithms is that the classifiers are largely agnostic to the kind of data they are given. We can feed any combination of sensor streams to the classifiers and compare the results to the labeled data for accuracy. This comparison allows us to determine which sensor streams are most useful for recognizing each type of individual and group activity. We conducted three paintball exercises in 2014 and, using the vast amount of data we recorded, developed trained models. We then applied these trained models to help recognize individual and group activity.
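The first approach, mapping a volunteer's sensor features to an activity label, can be sketched as follows. The post does not name the team's actual algorithms; this is a minimal nearest-centroid classifier on invented features (mean leg and head IMU acceleration) and invented labels, purely to show the shape of the pipeline:

```python
import math

# Minimal sketch of individual activity recognition from combined sensor
# streams. Features, labels, and classifier are illustrative inventions,
# not the SEI team's actual pipeline.

# Synthetic training examples: (mean leg IMU accel, mean head IMU accel)
TRAINING = {
    "running": [(9.0, 4.0), (8.5, 4.5), (9.5, 3.8)],
    "walking": [(3.0, 1.0), (2.8, 1.2), (3.2, 0.9)],
    "prone":   [(0.2, 0.1), (0.3, 0.2), (0.1, 0.1)],
}

def centroid(points):
    """Mean point of a list of equally sized feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(sample):
    """Assign the activity whose training centroid is nearest to the sample."""
    return min(CENTROIDS, key=lambda lbl: math.dist(sample, CENTROIDS[lbl]))

print(classify((8.8, 4.2)))  # running
```

Because the classifier only compares feature vectors, any combination of sensor streams can be swapped in as features and scored against the labeled ground truth, which is exactly the property the post highlights.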
Our objective was to use all those data streams to infer an individual’s activity or physical position and then determine how those inferences relate to the individual’s pre-defined set of roles and tasks. For example, it is not enough to know that someone’s role is "disposal technician"; it is also beneficial to know the particular phase of the mission the technician is in. For example, is our disposal technician getting suited up? If so, it would be of value to provide background information on the layout of the site that the technician is traveling to. Once a technician arrives at the ordnance, he or she would require different information: What type of device am I looking at? How big should my safety cordon be? How do I dismantle the device? How do I warn people? Am I able to warn people? Results and Challenges We conducted three rounds of paintball exercises, and our results improved with each exercise. In our first two exercises, with certain combinations of the sensors, we were able to identify exaggerated behaviors for an individual (i.e., running or falling) with 90 percent accuracy. While other researchers have conducted similar work in this field, our objective was to recognize group behavior in addition to individual behavior. Our aim is to determine whether a squad is under attack before a member of the squad must spend precious seconds to radio that information back to a command center or supporting forces. Knowing immediately when soldiers in a squad have come under fire will allow the forward operating base to deploy support seconds or even minutes earlier. One challenge we currently face is that models trained for a general population do not perform as accurately for specific individuals. Once we apply a general model to an individual, one of our future challenges is to learn the quirks of that particular person, for example, if an individual runs with a slightly different gait from everyone else in the world.
Our objective is onboard continuous learning so that the model becomes personalized to an individual. This personalization will enable us to do highly accurate activity recognition and information delivery. Another challenge we faced (and we are not the first to face it) is the logistics of working with human volunteers. Simply coordinating volunteers, sensors, wireless data gathering, and cameras proved hard. We learned that it worked best to be very specific in our communications: laying out a strict agenda and informing volunteers of it. After the exercises, we also faced massive amounts (hundreds of gigabytes) of data, including video and raw sensor data. We wrote several utilities that automate distillation of the data. We also wrote apps that would pull the data off the cameras automatically during an experiment and archive it on local hard drives. In addition, we wrote a remote control app for the cameras on the Android phones that were distributed around the paintball field. Looking Ahead Soldiers today carry much of the same gear their predecessors have carried for the last several decades (a radio, water, personal protective gear, ammunition, and weapons). Currently, soldiers pay a price to have computing on the battlefield. The hardware is heavy, the batteries are heavier still, and battery life is not optimal. Right now, the benefits of computing have not been sufficient for soldiers and first responders to justify the weight and the added complexity. Our work in this area will continue until wearable computers are less intrusive, especially for soldiers, and bring an information advantage that clearly offsets the added weight and complexity. We welcome your feedback on our research in the comments section below.
Additional Resources To listen to the podcast SEI-HCII Collaboration Explores Context-Aware Computing for Soldiers, which provides additional details on the SEI's research collaboration with Dr. Dey's group, please click here. To read "Understanding and Using Context" by Anind Dey, please visit http://www.cc.gatech.edu/fce/ctk/pubs/PeTe5-1.pdf. To learn about the research of the Advanced Mobile Systems Initiative, please visit http://www.sei.cmu.edu/about/organization/softwaresolutions/mobile-systems.cfm.
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:08pm</span>
Key facts:

- IDC estimates that in 2010 alone we generated enough digital information worldwide to fill a stack of DVDs reaching from the earth to the moon and back. That’s about 1.2 zettabytes, or more than one trillion gigabytes, a 50% increase over 2009.
- IDC further estimates that from 2011 on, the amount of data produced globally will double every 2 years.
- Every 2 days we generate more data than we did from the dawn of time through 2003.
- Worldwide volume of data is growing at 59% per year.
- Between 75% and 85% of data is unstructured.
- In 5 years, the majority of analytic data will come from unstructured sources.

The industry calls this the "Big Data" problem. It is typically a data collection that has grown so big that it has become difficult to handle using conventional relational database management systems or search systems. This includes data that was too voluminous, complex, or fast-moving to be of much use before, such as meter or sensor readings, event logs, Web pages, social network content, email messages, and multimedia files. As a result of this evolution, the Big Data universe is beginning to yield insights that are changing the way we work, and the world of business intelligence is changing with it. The user community is demanding new forms of business intelligence applications. Most of these new forms of BI require a technology that has been around for some time but has been foreign to most business intelligence environments until now: search technology. Although search technology and business intelligence technology have both been around for a long time, they have lived separate lives. The business intelligence community hasn’t really adopted search technology, nor has search technology seriously entered the business intelligence market. In this article we introduce search based BI applications and their business benefits. What is a Search Based Application (SBA)? Imagine an application with which users can look up customer or product information.
In a more traditional application, the user sees a number of input fields for entering data, such as name and address. The application then tries to find the right customers or products. In most cases, nothing will be found if the entered values are not exactly correct.

Search Based Business Intelligence Applications

A search based application is a new category of application that enables users to find information from any source and in any format. With a search-based application, the user can enter anything he knows about the customer or product, and the search engine will try to find those customers or products that resemble the keywords entered by the user. It’s a more free-format search. Search based applications integrate data from various sources and provide a single unified view.

Why Search Based Applications?

- Amazingly fast
- Highly scalable
- Low cost
- Deeper insight

What are the business benefits of a Search Based BI Application?

- Easy-to-use interface that end users understand
- Enables the integration and search of any data source
- Searches across multiple sources
- Easily integrates structured and unstructured data sources
- Indexes the sources in real time
- Provides assisted navigation to filter the search results, thereby reducing the time it takes to find information
- Ability to display results in a highly visual and interactive form

For additional information on how to create search based applications using FAST search, click here to download our conference material presented by Netwoven consultants. About Netwoven This article is written by Niraj Tenany, President and CEO of Netwoven and an Information Management practitioner. Niraj works with large and medium sized organizations and advises them on Enterprise Content Management and Business Intelligence strategies. For additional information, please contact Niraj at ntenany@netwoven.com.
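The "free format" lookup described above can be sketched in a few lines. This is a toy illustration with invented records and a crude similarity score; a real search based application would use a full search engine such as FAST, not difflib:

```python
import difflib

# Toy sketch of free-format lookup: rank records by rough textual
# similarity to whatever the user typed, instead of requiring exact
# values in fixed input fields. Data and cutoff are hypothetical.

CUSTOMERS = [
    "Acme Corporation, 100 Main St, Springfield",
    "Acme Widgets, 42 Elm Ave, Shelbyville",
    "Globex Inc, 1 Tower Rd, Capital City",
]

def search(query, records, cutoff=0.2):
    """Return records ordered by similarity to the query, best first."""
    scored = [
        (difflib.SequenceMatcher(None, query.lower(), rec.lower()).ratio(), rec)
        for rec in records
    ]
    return [rec for score, rec in sorted(scored, reverse=True) if score >= cutoff]

# The user needn't match any field exactly; a rough query still surfaces
# the closest record (here, the Springfield Acme entry ranks first).
print(search("acme springfield", CUSTOMERS)[0])
```

Contrast this with the traditional form-based application: a misspelled or partial "name" field there returns nothing, while the ranked search degrades gracefully to the nearest matches.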
Netwoven   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:07pm</span>
  To read the June 2015 SHRM Leading Indicators of National Employment Report, please click here.   ...
SHRM   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:07pm</span>