

By Todd Waits, Project Lead, CERT Cyber Security Solutions Directorate

This post is the latest in a series to help organizations implement DevOps. In a previous post, we defined DevOps as ensuring collaboration and integration of operations and development teams through the shared goal of delivering business value. Typically, when we envision DevOps implemented in an organization, we imagine a well-oiled machine that automates

- infrastructure provisioning
- code testing
- application deployment

Ultimately, these practices are a result of applying DevOps methods and tools. DevOps works for all sizes, from a team of one to an enterprise organization. DevOps can be seen as an extension of an Agile methodology. It requires all the knowledge and skills necessary to take a project from inception through sustainment to be contained within a dedicated project team. Organizational silos must be broken down. Only then can project risk be effectively mitigated. While DevOps is not, strictly speaking, continuous integration, delivery, or deployment, DevOps practices do enable a team to achieve the level of coordination and understanding necessary to automate infrastructure, testing, and deployment. In particular, DevOps provides organizations a way to ensure

- collaboration between project team roles
- infrastructure as code
- automation of tasks, processes, and workflows
- monitoring of applications and infrastructure

Business value drives DevOps development. Without a DevOps mindset, organizations often find their operations, development, and testing teams working toward short-sighted incentives of creating their infrastructure, test suites, or product increment. Once an organization breaks down the silos and integrates these areas of expertise, it can focus on working together toward the common, fundamental goal of delivering business value. Well-organized teams will find (or create) tools and techniques to enable DevOps practices in their organizations. 
Every organization is different and has different needs that must be met. The crux of DevOps, though, is not a killer tool or script, but a culture of collaboration and an ultimate commitment to deliver value. Every Thursday, the SEI will publish a new blog post that offers guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
SEI . Blog . Jul 27, 2015 01:46pm
We had a lot of interest in our blog about building our Business Design skills at DWP. We’ve recognised that working towards a joined-up business design - a clear description of our future customer proposition and the way we’ll organise ourselves to deliver it - will be critical to realising our transformation ambition. Joining up like this across our people, technology and processes is challenging for the Department, and so we have had to quickly build our capability to design the business. We need great people, and we’ve been developing our own people, recruiting, and hiring on an interim basis.

A community across DWP

Designers and architects are operating in many different areas around DWP. This year we have formalised within our transformation group a departmental Business Design team, which includes some of our Business Designers, and a Business Architecture Services team. And there are many related roles around the rest of DWP, for example in our technology organisation, major change programmes, and in operations. All these people need to share an understanding of our transformation ambitions.

Business Designers from around DWP

But what kind of person are we finding makes a great Business Designer?

What do we look for in a Business Designer?

Business transformation advocate - Act as an "ambassador" for our transformation journey; tell a compelling story of our future vision which engages our stakeholders. Get feedback and build it into our transformation story.

Strategically aware - Understand the unique purpose of DWP, its financial responsibilities, and its strategic goals. Apply strategic thinking and understand the detail. See the potential of business models and digital opportunities from the outside world, and use this knowledge to challenge DWP to deliver even better services.

Designer and developer of content - Bring designs together in our transformation roadmap; understand and shape design deliverables. Use business and technical understanding to align product and technology roadmaps. Understand our change lifecycle. Be able to use a wide range of tools and techniques to arrive at consistent business designs with the minimum effort required; recognise when to bring in experts.

Problem solver and creative thinker - Blend analytical, critical and creative thinking; help business areas identify and work through difficult design choices and build consensus around solutions; work with different specialisms in multi-disciplinary teams; challenge constrained thinking.

Communicator - Engage and relate to different audiences; communicate upwards, downwards and outwards; comfortable working with people at all levels within DWP; explain complex messages in an accessible way. "Do the hard work to make it simple."

Flexible - Adapt our tools and frameworks where necessary. "If it works, do it. If it doesn’t, don’t."

It’s hard to find great people who can be all these things. But DWP can offer a compelling environment for those who can rise to the challenge. Ultimately we’re looking for designers of great user experiences (which put people at the heart of our services), and designers of operating models (which form the basis of a modern DWP). Our job is to create something new, efficient and exciting, not just to create a slightly better version of the same organisation. After all, we’re part of the best digital startup in the country! If you’re interested in joining us, all of our permanent roles are on Civil Service Jobs, and don’t forget to follow @DigitalDWP on Twitter for announcements and updates. Keep in touch by following Andrew @abesford on Twitter.
DWP Digital . Blog . Jul 27, 2015 01:45pm
By Eileen Wrubel, Senior Member of the Technical Staff, Acquisition Support Program

Tension and disconnects between software and systems engineering functions are not new. Grady Campbell wrote in 2004 that "systems engineering and software engineering need to overcome a conceptual incompatibility (physical versus informational views of a system)" and that systems engineering decisions can create or contribute to software risk if they "prematurely over-constrain software engineering choices" or "inadequately communicate information, including unknowns and uncertainties, needed for effective software engineering." This tension holds true for Department of Defense (DoD) programs as well, which historically decompose systems from the system level down to subsystem behavior and break down work for the program based on this decomposition. Hardware-focused views are typically deemed not appropriate for software, and some systems engineers (and most systems engineering standards) have not yet adopted an integrated view of the two disciplines. An integrated view is necessary, however, because in complex software-reliant systems, software components often interact with multiple hardware components at different levels of the system architecture. In this blog post, I describe recently published research conducted by me and other members of the SEI’s Client Technical Solutions Division highlighting interactions on DoD programs between Agile software-development teams and their systems engineering counterparts in the development of software-reliant systems.

Foundations of Our Research

In the last several years, the DoD has focused efforts on decreasing the length of time needed to bring new software and other technical capabilities to soldiers. To accomplish this goal, members of the DoD acquisition community are increasingly turning their attention to Agile and other iterative development methods. 
The DoD 5000 series and other guidance for acquisition programs still offer a system-oriented perspective on acquisition, but they do not provide strong guidance on how to leverage iterative software development methods within that system-oriented context. With this research, our team, which also includes Suzanne Miller, Mary Ann Lapham, and Timothy A. Chick, does not advocate a particular development method. Instead, we explore the ways in which Agile software development teams are engaging systems engineers and associated stakeholders to identify factors that will help the DoD benefit from Agile methods and barriers to achieving those results. As detailed in our technical note on this research, Agile Software Teams: How They Engage with Systems Engineering on DoD Acquisition Programs, two key facets of systems engineering help us to understand why systems engineering is an important player in programs adopting Agile methods:

The product side of systems engineering: Systems engineering has a key role in transforming the artifacts that communicate intent of the system as understanding of the system evolves.

The service side of systems engineering: Systems engineering has an equally important role in communicating and coordinating important information about the evolving knowledge of the system among the many stakeholders, including technical staff, end users, and management. Systems engineers have a strong conflict resolution role when inevitable technical and programmatic conflicts arise among stakeholders.

When we analyzed these two facets, what emerged were two distinct possibilities of how the systems engineering community might take advantage of Agile methods. On the product side, the incremental, iterative approach with heavy user involvement common to all Agile methods could be leveraged to increase the speed of development of key requirement and design artifacts needed to implement different mission or system threads. 
Some methods, such as test-driven development, could be incorporated into the activities of systems engineering to increase the connection between the two sides of the typical systems engineering V lifecycle. On the service side, at the scale of a program that requires a separate systems engineering function, the coordination, communication, and conflict-resolution services that systems engineering provides could translate into a product owner surrogate role, a Scrum of Scrums facilitator role, or other specialty roles that show up in scaling approaches, such as the Scaled Agile Framework (SAFe), which provides an interactive knowledge base for implementing agile practices at enterprise scale.

Three Different Approaches to Systems Engineering with Agile

At a high level, we envisioned three different approaches, which we refer to as engagement models:

- Agile software engineering teams interacting with traditional systems engineering teams operating on the traditional systems engineering V model
- systems engineers acting as Agile team members
- systems engineering teams actually applying Agile methods to their own work and systems engineering functions

In the first of these approaches, where traditional systems engineering is being used and the systems engineering team is interacting with an Agile software team without being members of that team, we observed that Agile software engineering teams were providing deliverables to the systems engineering function at the boundary between the systems engineering function and software functions. These deliverables can include code or documentation and work products to facilitate technical reviews. The Agile team engaged in its Agile practices up to that point, assembled what it needed to hand off to a systems engineering function, and then passed those things over that boundary. So, the Agile team was free to operate in its iterative and incremental way until it handed everything over. 
The team then entered the systems engineering domain, and the systems engineering teams executed according to their plans and processes, typically with the traditional V model. As a result, some of the decisions typically made in an Agile software project, such as the selection of work with high business value or high technical infrastructure value, could not always be made because the programs were bound by the manner in which the systems engineering function had allocated the requirements and defined the work packages. For those teams, we did not always see all the benefit we observed in what I would call a "pure" Agile space because they had to deal with this interaction.

In the second of these approaches, a systems engineer typically operates on a software Agile team in the role of product owner, who is the person in charge of requirements and their prioritization. The systems engineer would be involved in the prioritization of features and functions going into a particular sprint. This involvement enables the systems engineer to follow the testing of the features and functionality, so that when work products come to the boundary between the software team and the systems engineering function, systems engineers know what is coming into test and evaluation. Just as importantly, the systems engineering team is prepared. There is a smooth transition as the work product flows from one part of the engineering process to the other. With this approach, we observed a smoother transition as well as additional opportunities for making changes in the systems engineering ideas and designs. We attributed these benefits to early learning enabled by the incremental approach and more detailed involvement with software. Some of the software implementation can change some of the system designs, so the software team actually had a little more influence on the systems engineering products when systems engineers worked in this way.  
In the third approach, where systems engineers apply Agile methods to their own work, they iterate requirements development, establish baselines, and evolve designs throughout the lifecycle. The biggest issue we observed with this approach is the translation of "working software," a fundamental tenet of Agile methods when applied to software, to an equivalent in systems engineering. We observed this more often in commercial IT organizations, where there is no significant hardware development component. We did, however, observe one large project that adopted Agile systems engineering methods across system, hardware, and software tasks. Although we only found one project, we have seen indicators of increased interest in this approach. The International Council on Systems Engineering (INCOSE) established a working group on Agile and systems engineering. The National Defense Industrial Association has also established an Agile and systems engineering working group. While this approach is the most novel, our research team did observe instances in which the dichotomy between product and service common to systems engineering comes into play. Systems engineering functions that recognize the service side were more prepared, because their vision isn’t limited to product artifact transformation.

Through our surveys and interviews, we observed that the greatest opportunities for successful Agile implementations occurred when systems engineering teams were, at the very least, aware of and engaged with the Agile processes. More generally, we identified successes and challenges in the following key areas:

Automation. Investment in automation can help to harmonize Agile with traditional constructs by streamlining the development of documentation deliverables and communication. Lack of automation frustrates cost- and resource-effective testing and hampers communication and transparency between software and systems engineering teams.  
Insight/oversight. The constant, systemic delivery of production-ready code by Agile software teams over short iterations and the use of metrics and tools such as burn-down and cumulative flow diagrams offer frequent, regular windows to monitor the progress and quality of the system under development, but stakeholders have to understand and actually exercise those insight opportunities to derive value from them.

Training. Teams that provide systems engineers, government program office personnel, and other stakeholders with Agile-specific training on a recurring basis have reported success with communication and expectation setting. Government program office staff training for the various career fields involved in acquisitions (e.g., contracting, finance), however, is generally functionally oriented: respondents indicated that strict capability-based training for government personnel has hampered the ability of many program office team members to conceptualize the changes in how contracts and delivery orders must be structured to most effectively engage Agile development teams.

Role of sponsors, advocates, and coaches. Sponsors, advocates, and coaches for Agile can be instrumental to teams undergoing change to meet operator needs and variations in processes and to ensure that culture can change to support Agile methods, processes, and techniques. When coaches established and delivered training on Agile, systems engineering process and software development became better intertwined, and decisions were made together instead of separately. Leadership advocacy sets expectations for communication and collaboration within and across organizations and provides support that allows Agile practitioners to explore program-specific tailoring of the processes and documents required by acquisition regulations.

Pilot programs. 
Respondents who engaged in piloting before broader rollouts indicated that demonstrated cost and schedule savings and the repeated demonstration of functional code often made believers out of previously skeptical systems engineering teams. They also consistently reported that the demonstration of cost, schedule, quality, insight, or predictability improvements via pilot projects garnered positive feedback from government program offices, which made it easier to secure leadership buy-in and sponsorship for expanding the use of Agile on future efforts.

Stakeholder involvement. Continuous collaboration with stakeholders is often difficult to achieve in a DoD setting due to factors such as lack of experience and lack of availability due to operations tempo or lack of funding. This collaboration is also challenging due to the variety of stakeholders that may bring conflicting requirements and/or priorities to the table. Certification and accreditation processes were cited as a particular pain point, due to constantly evolving requirements and the typical "black box" nature of the processes.

Defining and evolving requirements. Agile practitioners accept that the unknowns at the inception of a program are a natural and expected phenomenon, rather than treating those uncertainties as weak points. Agile software practitioners still report a tendency in many acquisition programs to create detailed software requirements at the inception of the program when the system is decomposed and the initial work breakdown structure (WBS) developed. When detailed requirements are placed on contract at the beginning of a program, they are typically accompanied by change control boards and engineering change proposal (ECP) processes that many Agile practitioners report to be cumbersome, time-consuming, and expensive to complete.

Verification and validation. 
In respondent settings where test and evaluation (T&E) personnel were included explicitly as members of the development team, they reported that many of the T&E personnel found significant benefit in developing acceptance criteria for stories and creating acceptance test fragments for portions of the system being completed. They also gained significant insight into the architecture, quality attributes, and functioning of the system, all of which translated into better test readiness when independent test activities were undertaken. Organizational issues and changes to job expectations may hamper efforts to include T&E staff as full team members, but careful communication, coordination, and use of test resources can help alleviate some of these challenges.

Aligning the program roadmap with Agile increments. Agile development teams have worked with programs in a number of ways to attempt to tailor the existing milestones and reviews to more closely align with the manner in which Agile teams deliver software functionality. Many respondents engaged in new system development reported being granted the flexibility of tailoring program milestones such as the preliminary design review (PDR) and critical design review (CDR) by engaging systems engineers and program offices in sprint and iteration reviews. Several respondents reported negotiating for requirements (expressed as capabilities to evolve under the Agile paradigm) and then time-boxing their Agile increments against CDR targets.

Looking Ahead

One of the most important aspects of an Agile implementation is the use of the continuous improvement mechanism of retrospectives. During our interviews, we noted a variety of answers to the question "If you could change one thing about Agile interaction with systems engineering, what would it be?"  
Several themes emerged from the responses, including

- improved understanding within the systems engineering discipline of what Agile methods entail
- improved communication and reporting of progress for Agile teams
- alignment of Agile with policies and practices
- systems engineering process adaptation for Agile
- cultural change necessary for the DoD to fully realize the promise of Agile

These retrospectives inform future areas of work, including the appropriate use of metrics for reporting and managing programs employing Agile software development and DoD contract vehicles and provisions that support the use of these methods. We welcome your feedback on our research.

Additional Resources

To download our technical note detailing this research, Agile Software Teams: How They Engage with Systems Engineering on DoD Acquisition Programs, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=295943.

To download the technical note, Potential Use of Agile Methods in Selected DoD Acquisitions: Requirements Development and Management, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=89158.

To view all of our recent publications regarding Agile adoption in the Department of Defense, please visit http://www.sei.cmu.edu/acquisition/research/. 
SEI . Blog . Jul 27, 2015 01:45pm
Millions of professionals flock to Las Vegas each year for meetings and conventions, and the Society for Human Resource Management is once again part of that group in 2015. Corporate gatherings are big business in Las Vegas, where tourism and related travel activity represent the heart of the metro region’s economy. More than 41 million visitors came to Las Vegas in 2014, according to the Las Vegas Convention and Visitors Authority (LVCVA). Of that group, more than 5.1 million were convention delegates. The city hosted more than 22,000...
SHRM . Blog . Jul 27, 2015 01:45pm
The first DWP Digital Academy in Fulham opened for business on 24 February 2014. It was set up to help us increase our capacity and grow more of our own digital capability across DWP. This week we reached an important milestone as our 100th student, Suzanne Butler, graduated from our Digital Academy. In this short film Suzanne talks about what she has learned at the Academy and how DWP is transforming by starting with the user. Keep in touch by following @DigitalDWP.
DWP Digital . Blog . Jul 27, 2015 01:44pm
Although the definition varies, millennials (also known as Generation Y) are typically considered to be those people who reached legal age around the turn of the 21st century. Currently, millennials comprise approximately 33 percent of the global work force, and estimates from the BPW Foundation project that by 2025, that number will increase to 75 percent. In short, millennials are the future of your business, and they must be managed effectively — and included in your succession planning. The Millennial Persona No generalization is universally true for all...
SHRM . Blog . Jul 27, 2015 01:43pm
By Chris Taschner, Project Lead, CERT Cyber Security Solutions Directorate

This post is the latest in a series to help organizations implement DevOps. Software development teams often view software security as an afterthought, something that can be added on after the product is fully functional. Although this approach may have made some sense in the past, today it’s largely seen as a mistake since it can lead to unanticipated vulnerabilities in released code. DevOps provides a mechanism for change and enforcement when it comes to security. DevOps practitioners should find it natural to integrate a security focus into development iterations by adding security tests to their continuous integration process. Continuous integration is the practice of merging all development versions of a code base several times a day. This practice provides the same level of automated enforcement for security attributes as for other functional and non-functional attributes, ultimately leading to more secure, robust software systems. Making security testing a part of continuous integration enforces security standards on your software and identifies security as a first-class quality attribute of your project. Making this decision from the start on a new project enables those responsible for development and operations to make knowledgeable decisions about the architecture, design, and implementation with full consideration given to necessary security requirements. This process may mean choosing certain technologies over others based on security concerns. For instance, choosing to implement Secure Sockets Layer (SSL) rather than sending data in the clear may improve application security. Being forced to make security decisions early may also mean that developers are incentivized to define expected development processes in a way that requires a certain level of security-focused unit test coverage for critical modules. 
For instance, a team might employ tests to check that SQL injection prevention is implemented properly. By enforcing these decisions through continuous integration, teams can use their existing DevOps practices to ensure an unwavering, yet attainable and efficient, focus on software security. The image above represents one approach for adding security testing to the DevOps cycle. While continuous security testing on new projects is clearly ideal, a strong argument exists for retrofitting security testing to continuous integration for ongoing software projects, even if security testing has been previously non-existent. As new features are secured, existing unchanged features may also see security benefits. Moreover, exposing the lack of security thinking in previous processes (e.g., by automating test coverage metrics or failing builds for security oversights) can motivate developers to refactor and secure previously unattended code. While this new security influence may take some time to propagate through existing codebases, fostering a security-aware culture in software development teams is a long-term win for any organization. Every Thursday, the SEI Blog will publish a new blog post that will offer guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
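To make the idea of a security gate in continuous integration concrete, here is a minimal sketch of what such a check could look like. All class and method names are illustrative inventions, not part of the post or any standard tool; the heuristic (require a `?` placeholder and reject embedded string literals) is deliberately simplistic and only meant to show how a build could fail on an unparameterized query.

```java
// Hypothetical CI security gate: fail the build if a query template looks
// like it was built by concatenating user input rather than using JDBC-style
// '?' parameter binding. Names and the heuristic are illustrative only.
public class SqlQueryGate {

    // Application code under test: returns a parameterized query template.
    public static String userLookupQuery() {
        return "SELECT id, name FROM users WHERE name = ?";
    }

    // The gate itself: a template must contain a placeholder and must not
    // embed single-quoted literals, which would suggest concatenated input.
    public static boolean isParameterized(String sql) {
        return sql.contains("?") && !sql.contains("'");
    }

    public static void main(String[] args) {
        if (!isParameterized(userLookupQuery())) {
            // In a real pipeline this failure would break the build.
            throw new AssertionError("query is not parameterized");
        }
        System.out.println("security gate passed");
    }
}
```

In practice a team would more likely reach for an off-the-shelf static analysis tool in the CI pipeline, but the principle is the same: the build fails automatically when a security rule is violated.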
SEI . Blog . Jul 27, 2015 01:43pm
Callum Davies - HR Fast Stream Graduate

I joined my first discovery project 4 weeks ago - as part of the HR Product Owner team. I work within a digital capability team, mainly on permanent recruitment for digital talent, understanding how we can attract and retain some of the top talent out there. I’m also an HR Fast Stream Graduate freshly out of university with a degree in Philosophy. This was my first real, immersive experience of agile, and a fascinating one. It has been an incredibly different journey from traditional projects I have worked on within HR, as there has been an immense focus on aesthetic working, sharing and constantly thinking outside the box to produce some brilliant ideas. It has been eye-opening as to how effective agile can be in such a small space of time with an incredible team, and I can’t wait to see where else this could be applied within the department. We were discovering how to improve the way we attract people with the right talent and skills to apply for digital and technology roles in DWP. Surely that should be easy? Who wouldn’t want to work here, right? We just need a website to point everyone at and, job done. That might be where the ideas started, but thanks to agile, we ended up with a much broader, more interesting and hopefully more effective set of products. We went from ideas to an alpha product in 6 days of intensive teamwork, collaboration and challenge. What I loved about agile is that it made everyone take a step back and think about the users. Preconceived ideas about the solution don’t really work in the face of clear insight from users. We interviewed recent recruits to discover their experience and used this insight to design personas. We thought about Soph - she’s a Java developer, 25-32 years old, shares her code online, keeps up with her friends who work in similar jobs. 
Soph isn’t looking for a new job but keeps up to date with software development, and has an eye on her future when she plans to start a family while keeping her career moving. The turning point was when we mapped the user journey - we thought about how a user would get from being ‘not aware’, to ‘taking notice’, to ‘interested’, through to ‘finding out more’, then applying for a digital or technology role in DWP. Then we started building the alpha - a range of digital products across different channels, to take the user through the journey from being not aware to applying for a role, as smoothly as possible. We produced a core narrative, and this fed blogs, a Twitter feed, a campaign page, and a social media approach to engage with relevant sectors and professional audiences. We developed a working prototype, including graphic design and front-end development. The finale was a show and tell where people who will be recruiting into digital and technology roles in DWP liked what they saw, and agreed that the user-centred approach had delivered a better set of products. We’re now developing this into a public beta to launch in January 2015. Users will tell us whether it works, and evaluation of the user journeys will tell us which products are performing and which need to be improved. Then we’ll keep iterating.
DWP Digital . Blog . Jul 27, 2015 01:43pm
By David Svoboda, Member of the Technical Staff, CERT Secure Coding Initiative

A zero-day vulnerability refers to a software security vulnerability that has been exploited before any patch is published. In the past, vulnerabilities were widely exploited even when a patch was available, which means they were not zero-day. Today, zero-day vulnerabilities are common. Notorious examples include the recent Stuxnet and Operation Aurora exploits. Vulnerabilities may arise from a variety of sources, but most vulnerabilities are the result of simple coding errors. Consequently, developers need to understand common traps and pitfalls in the programming language, libraries, and platform to produce code that is free of vulnerabilities. To address this problem, CERT published The CERT Oracle Coding Standard for Java in 2011. This book is version 1 of this standard and was written primarily for Java SE 6, but also covers features introduced in Java SE 7. This coding standard provides secure coding rules that help programmers recognize and avoid vulnerabilities in their products. Each rule provides simple instructions regarding what a programmer must and must not do. Each rule description is accompanied by noncompliant code examples, as well as compliant solutions that can be used instead. In this blog post, I examine a Java zero-day vulnerability, CVE-2012-0507, which infected half a million Macintosh computers, and consider how this exploit could have been prevented through adherence to two secure coding rules.

Java Security Background

Java was designed in the early 1990s with security in mind. Java’s creators wanted it to have the ability to run untrusted code that could be immediately delivered and run over the network. With the growth of the World Wide Web, this requirement translated into the ability to run Java applets over the web. 
Because an applet need not be trusted, Java required that each applet run in a security sandbox maintained by an object called the SecurityManager. This object acts as a chaperone, overseeing what an applet does and generating a SecurityException if the applet violates a security policy. A SecurityException typically causes the applet to terminate without disturbing the rest of the user’s browsing experience. Actions forbidden by the SecurityManager include accessing the file system, accessing the network (except the host the applet came from), running external programs, and disabling the SecurityManager itself. However, a signed applet may request from the user the ability to do some or all of these privileged actions. Java’s huge (and open-source) core library provides a large attack surface that can conceal many vulnerabilities. As Java evolves, the library grows, and hackers know that a large codebase means many new vulnerabilities can be found and exploited. Each vulnerability is a golden opportunity for hackers to gain recognition among their peers or to sell their exploit for big bucks on the black market. Roger A. Grimes reported in InfoWorld that an increasing number of governments are buying hackers’ exploits—or hiring the hackers. CVE 2012-0507 and the Flashback Trojan Jeroen Frijters, technical director of Sumatra Software, first discovered the vulnerability later classified as CVE 2012-0507 in 2011 while developing IKVM, a Java Virtual Machine (JVM) for .NET. Frijters practiced responsible disclosure by notifying Oracle when he discovered the vulnerability, coordinating his publication of the vulnerability details with Oracle’s release of an update to Java (1.7.0_03), which fixed the vulnerability in February 2012. Meanwhile, the Mac Flashback Trojan had languished in obscurity for several months until it was altered to exploit the Java vulnerability in March 2012. Apple had supported Java ever since Mac OS X was released in 2001. 
Moreover, it had distributed and updated Java as part of OS X itself. Unfortunately, Apple had not applied Oracle’s patch by the time the exploit started attacking Macs in the wild. After its makeover, Flashback managed to infect over 500,000 Apple computers, and 22,000 Apple computers remained infected as of January 2014. Apple has since unbundled Java from OS X and no longer distributes it; Mac users who wish to install Java must now download it from Oracle. Preventing Flashback Earlier this year, Milton Smith, who leads the strategic security program for Java products at Oracle, asked Robert Seacord and me to participate as reviewers on the JavaOne 2014 Security Track review team. He also encouraged us to propose a Java exploit presentation for the track. Our presentation focused on how the Flashback vulnerability could have been prevented. To prevent future exploits, it is important to understand how these mistakes occurred. Two issues were at play in Flashback, both of which the developers could have prevented had they followed our Java coding rules. First, one common principle of object-oriented design is that every class has private data that only that class can manipulate. The AtomicReferenceArray class contains a private array but does not ensure that this array remains private. This class is also serializable, which allows an object of this class to be written to a file that can later be deserialized, that is, read back into memory. The diagram below illustrates the data structure the attackers produced. This data structure could not have been produced by running Java code, because the AtomicReferenceArray would not have allowed its private array to be accessed by any other data structure. Instead, the attackers serialized an AtomicReferenceArray object to a file and modified that file so that the deserialized object contained a public handle to the AtomicReferenceArray’s private array. 
Deserializing this data structure was possible because AtomicReferenceArray failed to override the default deserialization method, which allowed outside access to the internal array. Failure to override the default deserialization method violates the following CERT rule: SER07-J, "Do not use the default serialized form for classes with implementation-defined invariants." Oracle mitigated the vulnerability by adding the following method to AtomicReferenceArray:

    private void readObject(java.io.ObjectInputStream s)
            throws java.io.IOException, ClassNotFoundException {
        Object a = s.readFields().get("array", null);
        if (a == null || !a.getClass().isArray())
            throw new java.io.InvalidObjectException("Not array type");
        if (a.getClass() != Object[].class)
            a = Arrays.copyOf((Object[]) a, Array.getLength(a), Object[].class);
        unsafe.putObjectVolatile(this, arrayFieldOffset, a);
    }

This method is invoked whenever an AtomicReferenceArray is deserialized. By making a private copy of the internal array whenever it is not exactly of type Object[], this class guarantees the privacy of its internal array, disabling the exploit. The second issue allowed an attacker to insert an object into the AtomicReferenceArray’s private array and then extract it as an object of a different class. The JVM was consequently tricked into thinking that the object is of a different type than it actually is. In the Flashback exploit, MalClassLoader is a malicious class derived from the benign ClassLoader class, which contains a defineClass() method that allows the creation of new code. This method is accessible only to subclasses, including the malicious MalClassLoader class. The Java security manager prevents an unprivileged applet from creating custom class loaders. 
Unfortunately, the exploit tricks the JVM into believing that a preexisting class loader object is actually a MalClassLoader object. Consequently, this object could use defineClass() to build a new Java class from an array of bytes and execute its default constructor. This new class was created with full privileges, and the security manager did not prevent it from doing anything. The new class then disabled the security manager, which effectively enabled the rest of the exploit code to do anything normally forbidden by the SecurityManager. Java defines heap pollution as a condition in which a variable of a parameterized type, such as a generic collection expected to contain elements of one type, inadvertently refers to an element of another type. The JVM has mechanisms to prevent heap pollution or detect that it has happened, but heap pollution can still be successfully exploited by an attacker. In confusing a ClassLoader object with a MalClassLoader object, this exploit polluted the AtomicReferenceArray object, which violates the following CERT rule: OBJ03-J, "Prevent heap pollution." In Version 1.0 of the standard, OBJ03-J was titled "Do not mix generic with nongeneric raw types in new code," but it was updated in response to community feedback. Future Work Java is a victim of its success: its widespread use and popularity have made it a lucrative target for hackers, much as Microsoft Windows has been. Java includes a huge library with many features, some of which are obsolete or deprecated. To prevent future Java vulnerabilities, Oracle is working to comply with the Java Secure Coding rules during ongoing development and maintenance of the Java codebase. Likewise, compliance with these rules is critical for Java developers outside of Oracle, even if their software is not used as widely as Java itself. We are currently updating The CERT Oracle Coding Standard for Java for Java SE 8 on our community wiki. We encourage the community to participate in the continuing evolution of the standard. 
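The heap pollution that OBJ03-J guards against is easy to reproduce in miniature. The following sketch (not the Flashback exploit itself, and the class name is ours) shows how assigning a generic collection to a raw-typed variable lets an object of the wrong type slip in, with the failure surfacing only later, at the read site:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of heap pollution via a raw type.
public class HeapPollutionDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List raw = strings;           // raw type: compile-time checking is lost
        raw.add(Integer.valueOf(42)); // heap pollution: an Integer enters a List<String>
        try {
            // The compiler inserts a checkcast here; it fails only at run time.
            String s = strings.get(0);
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("polluted list detected");
        }
    }
}
```

Compiling without the @SuppressWarnings annotation makes javac (with -Xlint) flag both the raw assignment and the unchecked add() call, which is exactly the warning the rule tells you not to ignore.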
Anyone is free to browse the standard and submit comments, and experts are frequently invited to become editors. Community involvement is essential to producing a high-quality coding standard. Both commercial and open-source tools, such as FindBugs, can identify potential violations of the Java Secure Coding rules. Additional Resources To sign up for a free account on the CERT Secure Coding wiki, please visit http://www.securecoding.cert.org. To subscribe to our Secure Coding eNewsletter, please click here. For more information about The CERT Oracle Secure Coding Standard for Java, please visit https://www.securecoding.cert.org/confluence/display/java/The+CERT+Oracle+Coding+Standard+for+Java.
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:42pm</span>
Chris Beardsell I’m Chris Beardsell and I’ve been working as a User Researcher in DWP for a bit under 1 year - but what brought me here? I’m constantly, endlessly, irritatingly curious about users. I want to know what makes them tick, what online services they use, what’s in their head when they think about using a government service. Users are fascinating and I want to understand more about their needs - they don’t always think like me or the organisation, they don’t act the way I might expect, they want different things to meet their needs, not my or the organisation’s needs. So how do I satisfy my curiosity? I devise different research strategies, depending on the users or the service that’s being designed. I might use focus groups, maybe a questionnaire, maybe bring users into user testing laboratory conditions to explore them in more detail. And then there’s the analysis of what I find. Actually, I like that as much as I like the curiosity. I analyse the results, identify trends, feed these into the development work for the team to come up with prototypes. But I don’t work alone - I might sound like an introvert (if anything I’m an extrovert, but let’s not get into labels…) - but collaborating with everyone in the agile team is a real buzz. All the user research in the world is worthless unless I can communicate the insights so that the team gets a strong and shared understanding of the user. The user research will define the design and development of services from early stage concept and prototypes through to building alphas and betas. But why did I indulge my curiosity as a user researcher in DWP? It’s so I can make a difference. Every day, DWP helps almost 10,000 people move off Jobseeker’s Allowance. Every year, DWP takes 4m job vacancies for 330,000 employers and processes 7.35m benefit claims. In 12 months DWP will collect or arrange over £1.2bn of child maintenance on behalf of 900,000 children. And pay 22m customers £165bn in benefits and pensions. 
Think of all those users - we’re designing and delivering public services that millions of people rely on. We transform lives by helping the most disadvantaged people to turn their lives around. Right now, it’s a great place to be a user researcher.
DWP Digital   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:42pm</span>
Well, "Mad Men" is no more. As AMC marketed it, we have come to an "end of an era." Or have we? While it was only a television show, or so people try to tell me, the workplace implications resonated with so many of us in the HR/business community. Perhaps that is because, while much has changed, some things are still painfully similar. Here are six themes from Mad Men which are as relevant today as they were in the "Mad Men days." Knowing...
SHRM   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:41pm</span>
By Tim Palko Senior Member of the Technical Staff CERT Cyber Security Solutions Directorate This post is the latest in a series to help organizations implement DevOps. Environment parity is the ideal state in which the various environments where code is executed behave equivalently. The lack of environment parity is one of the more frustrating and tenacious aspects of software development. Deployments and development alike fall victim to this pitfall too often, reducing stability, predictability, and productivity. When parity is not achieved, environments behave differently, which makes troubleshooting hard and can make collaboration seem impossible. This lack of parity is a burden for too many developers and operational staff. Looking back on almost every problem I have seen in new production deployments, I find it hard to think of one issue that wasn't due in some part to lack of parity. For developers, this pain is felt when integrating and testing code. In a traditional sense, the problem is already solved. Virtualization is old news, even for personal machines, enabling developers to recreate the actual deployment target platforms for their local development. Provisioning an environment is a somewhat older trick, with origins as old as shell scripts, made even more robust by the advent of automated environment provisioning tools like Chef and Puppet. So, why is parity still an issue? Or is it? The job of an operations team often involves implementing full parity across deployment environments, such as those used for testing, staging, and production. Conversely, the development team is almost entirely responsible for provisioning development machines. To achieve 100 percent parity between both sets of environments, both teams must speak the same language and use the same resources. Chef and Puppet, both crafted for the operations role, are just slightly out of reach for a busy developer. 
Each has a respectable learning curve, and neither really solves the parity problem completely: developers still need to virtualize the correct production target platform. All this additional work incurs a decent amount of overhead when you just want to write code! This is where Vagrant comes in. Vagrant is a developer's tool that basically serves up a virtualized and provisioned environment to developers using operations tools with a single, declarative script and a simple command-line interface. Vagrant cuts out the grunt work needed to stand up a virtual machine (VM) and it removes the need to configure or run, for example, chef-server and chef-client. Vagrant hides all of this and leaves the developer with a simple script, an extensionless file named Vagrantfile, which can be checked into source control along with the code. Let's look at an example Vagrantfile that stands up and provisions a RHEL 6.5 server with PostgreSQL and Nginx:

    Vagrant.configure("2") do |config|
      config.vm.box = "rhouinard/oracle-65-x64"
      config.vm.network :forwarded_port, guest: 80, host: 8000
      config.vm.provision :chef_solo do |chef|
        chef.add_recipe "postgresql::server"
        chef.add_recipe "postgresql::client"
        chef.add_recipe "nginx"
        chef.json.merge!({
          :postgresql => {
            :password => { :postgres => "idontlikerandompasswords" },
            :pg_hba => [{ :type => 'host', :db => 'mydb', :user => 'mydbuser',
                          :addr => '127.0.0.1/32', :method => 'md5' }]
          }
        })
      end
    end

This isn't the simplest example, but it shows us a sample of what we can do with Vagrant. Reading from the top, we see that we are declaring a "box" or target VM: RHEL 6.5. Vagrant will fetch this box from its own cloud provider (Vagrant Cloud), a network file system, or another URL, and work with VirtualBox automatically to stand it up. 
Next in the example, we forward traffic from port 8000 on the host (your development machine) to port 80 on the guest VM, which allows us to run a web server listening on port 80 inside the VM but test it by hitting port 8000 on our host system. At this point you may be asking what good this does, because isn't our code stuck on the host machine? Actually, Vagrant gives us the root folder of the project as a network share inside the VM. So, the web server in the VM can execute project code, but we still have control over it in the development environment on the host. That is Vagrant in a nutshell. But don't forget the chef-solo provisioner block, which ultimately gives us 100 percent parity with our test and production environments. We don't need to worry about running Chef as a provisioner—Vagrant will do that for us. As developers, we just need to work with the operations team to make sure this configuration is accurate and fulfills the requirements. How does it all work? The following command will kick off the entire process, and at the end, you will have a running, provisioned VM: $ vagrant up This command will start a secure shell on the VM: $ vagrant ssh There are many other Vagrant commands to reprovision, suspend, resume, and restart the VM, as well as manage the Vagrant boxes themselves. For example, you could preconfigure a VM for the team if there is a special requirement, repackage that VM as a custom Vagrant box, and distribute it on a network share. In this case, config.vm.box would look something like: config.vm.box = "http://file-share/vagrant-boxes/oracle-65-x64.box" Vagrant works with VirtualBox, but it also works with VMware Fusion or Workstation, and with some finagling can even stand up and provision VMs on an ESX host. Every Thursday, the SEI will publish a new blog post that will offer guidelines and practical advice to organizations seeking to adopt DevOps in practice. 
We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below. Additional Resources To read all the installments in our weekly DevOps series, please click here. To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:41pm</span>
Ben Holliday - Head of User Experience Design As part of our digital transformation we’re looking for people to work with us in new digital design roles. Our aim is to build DWP digital services for GOV.UK that are so good, people prefer to use them over the alternatives. To achieve this we need to design solutions that meet user needs. The roles we’ve created are all an essential part of this process: Interaction Designers, Content Designers, and Front End Developers. While these roles aren’t completely new to the department, we’re now building a design team with these specialist skills. Not only will these people work separately within our different project teams, they’ll also be working together. This is essential as we focus less on products and more on services that create a consistent user experience when people interact with DWP throughout their lives. What we mean by ‘user experience’ and ‘design’ When we talk about ‘design’, we’re not talking about how things look. Design is the process we use to solve problems and make things work - it helps us understand and deal with the complexity of government in order to deliver services. These services should work well for the people that use them - they should meet their needs, or simply help them to get on with their lives. We believe that ‘user-centred’ or ‘user experience’ (UX) design isn’t, and shouldn’t be seen as, an exclusive job role. The user experience of our services is ultimately the result of the decisions and the work of an entire team. This is why we’re building capability through our DWP Digital Academy, equipping multi-disciplinary teams to work together to design and deliver digital services. We can’t ignore the realities of business or policy, so having digital design specialists on each of these teams helps us find solutions when we face difficult product decisions - resolving the tension between user needs and the constraints we’re working with. 
How we deliver services to meet user needs We work using agile methods and try to make sure all our teams have an Interaction Designer and Content Designer working closely together with User Researchers on every project. Interaction Designers focus on all aspects of the service that directly affect the end experience of users. They care about each of the individual interactions within a user interface, but also understand and influence how interactions shape user journeys across one or multiple channels. Interaction Designers will work with Front End Developers - both of these roles focus on creating prototypes that enable us to regularly test our work with real users. We also have specialist roles for Content Designers. Almost everything we design is content - from explaining benefit entitlement to writing clear and concise content that helps people use our digital services. Content Designers are as comfortable writing long-form as they are labelling individual form fields. I briefly mentioned Front End Developers. They’re also part of every project and help us build and test user journeys, working in real code as soon as possible. As well as prototyping, they work with other Developers as services progress through Alpha and Beta, moving into more technical environments. Interested in joining us? All the roles we’re currently advertising will be based at one of our digital hubs in London, Leeds or Newcastle, working on a range of new and existing digital projects. Visit the individual job listings or Civil Service Jobs for more information. These roles close on January 16th.
DWP Digital   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:40pm</span>
By Julien Delange Member of the Technical Staff Software Solutions Division Over the years, software architects and developers have designed many methods and metrics to evaluate software complexity and its impact on quality attributes, such as maintainability, quality, and performance. Existing studies and experience have shown that highly complex systems are harder to understand, maintain, and upgrade. Managing software complexity is therefore useful, especially for software that must be maintained for many years. To generate complexity metrics, tools extract applicable data—such as source lines of code, cohesion, coupling, and more—from binary or source code to analyze the software and report its complexity and quality. Several tools support these techniques and help stakeholders manage the evolution of system development, drive quality improvements, prevent lack of cohesion, and perform other tasks. To date, such approaches have been used successfully in many projects, but as system development moves toward model-based engineering, these methods, metrics, and tools might not be sufficient to manage model complexity. This blog post details the state of the art for reporting model complexity and introduces research underway at the SEI in this area. Complexity and Model-Based Engineering In the domain of embedded systems, projects are increasingly adopting model-based engineering tools, such as SCADE or Simulink, to specify and capture the functional architecture. Thanks to code generators, the bulk of these systems are no longer implemented manually but are instead generated automatically from models. Code is then created from these abstract representations, with the result that code metrics no longer match engineering effort. A simple change to a single model component can modify hundreds of lines of code, while modifying hundreds of model components might have little impact on the generated code. 
In fact, changes in the model are not always proportional to source code changes. For these reasons, code-analysis techniques (at the binary- or source-code level) cannot be used, and new methods must be developed to evaluate the quality and complexity of these auto-generated models. These issues have been studied for several years, and interest in the topic continues to grow. Some work has focused on mapping existing source-code metrics (for example, Halstead or cyclomatic complexity) to models (as in the research by Jeevan Prabhu), whereas other work has proposed new metrics (such as structure, data complexity, or component instability, as defined by Marta Olszewska). Regardless of the selected technique, the goal is to analyze the impact of a change and the overall quality of the model by examining various aspects, such as the number of blocks, number of connections, nesting level, and definition of data types. Tools also report metrics from the models—for example, sldiagnostics and its front end report metrics of Simulink models. Reducing Complexity of Models As model-based systems evolve, they are modified, updated, and integrated with more components. Moreover, as more functions are now implemented in software, models become more complex (with many inter-connected components that have potentially conflicting requirements), which makes their verification, analysis, and maintenance harder. For these reasons, detecting system complexity as early as possible can help developers manage it and keep it below a critical threshold. The existing tools mentioned previously help designers by producing a single value that reflects the quality and complexity of a system. Hence, they are useful for managing system evolution. However, these tools do not detail how to reduce complexity and improve system quality. 
On the other hand, reducing complexity and improving system quality is the whole point of having these metrics: ultimately, system stakeholders want to keep the quality of system artifacts under control and fix potential defects or reduce sources of complexity. Among the contributors to complexity, correct use of data types is particularly important. For example, to specify a command to an actuator, using an enumerated type with restricted values is more accurate than using a generic type (such as an integer). Using a generic type can degrade system quality and makes system analysis, testing, and certification more difficult than using a restricted type (which can reduce the system state space). Nevertheless, many models rely intensively on generic types, such as Boolean or integer, which are not appropriate when data values are limited, as with a system state or the value of a command. Modeling guidelines recommend using enumerated types as much as possible, but engineers often don’t, and the resulting models lack data abstraction and incur system complexity (such as an increasing number of interfaces or states). For example, consider a system with a component representing a door sensor sending the actual status (open or closed). Developers could take different implementation strategies: Using Boolean types: the block will send the status using two Boolean variables—one to indicate that the door is open, the other to indicate that the door is closed. Using enumerated types: the block will send the status using a single variable that indicates the status (open or closed). With the first method, both variables can be true, meaning that the door can be open and closed at the same time. Using the second method reduces the block complexity: it reduces the number of variables by 50 percent, and it ensures consistency because the sensor can report only one possible status. 
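The door-sensor example above can be made concrete in a few lines. This is an illustrative sketch (the class and method names are ours, not from any production model): the two-Boolean encoding admits four representable states, two of which are physically impossible, while the enumerated type admits exactly the legal ones.

```java
// Comparing the state space of the two encodings for a door sensor.
public class DoorSensor {
    // Boolean encoding: two independent flags, 2 x 2 = 4 combinations,
    // including the impossible (open && closed) and (!open && !closed).
    static int booleanStateSpace() {
        return 2 * 2;
    }

    // Enumerated encoding: the type itself rules out impossible states.
    enum DoorStatus { OPEN, CLOSED }

    static int enumStateSpace() {
        return DoorStatus.values().length;
    }

    public static void main(String[] args) {
        System.out.println("boolean encoding: " + booleanStateSpace() + " states");
        System.out.println("enum encoding: " + enumStateSpace() + " states");
    }
}
```

A type checker, a test generator, or a model analyzer working on the enum encoding has half the states to consider, which is the reduction the text describes.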
Using appropriate data abstraction provides many other benefits (such as strong type checking) and will definitely help engineers reduce complexity and avoid errors. Applying such abstraction to real systems with hundreds of variables might not reduce system complexity by half, but it will have a significant impact. Our Approach The SEI is dedicated to helping organizations manage software complexity more effectively, especially for systems that must be maintained and upgraded over years, such as those in the avionics, aerospace, or automotive domains. So far in this post, I have detailed the state of the art for reporting model complexity. I will conclude by introducing research now underway at the SEI to address the issue of managing model complexity. As more developers of embedded systems adopt model-based methods, avoiding complexity as early as possible ensures that it does not propagate through the development process. I am collaborating with a group of SEI researchers who are actively working to identify the root causes of complexity in models and propose design alternatives to reduce complexity and improve system quality. The project will propose an approach to qualify and quantify complexity in models, ideally leveraging existing metrics and applying them to models, while also proposing solutions to re-design the system and adopt modeling patterns that avoid complexity. Our work will focus on using existing metrics (such as cyclomatic complexity) on models but also on finding new ones to detect emerging complexity. For example, one idea is to focus on data abstraction (e.g., using enumerated types rather than generic ones, as explained previously). These metrics will then be reused by tools to help system designers propose implementation alternatives that avoid this complexity. 
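As a rough sketch of how one existing code metric could carry over to models, McCabe's cyclomatic complexity for a single connected graph is M = E - N + 2, where E is the number of edges and N the number of nodes. Treating a model's blocks as nodes and its connections as edges gives a first approximation (the class name and the counts below are hypothetical, not taken from any real Simulink model):

```java
// Toy calculator for cyclomatic complexity of a single connected model graph.
public class ModelMetrics {
    // McCabe: M = edges - nodes + 2 (for one connected component, P = 1).
    static int cyclomatic(int edges, int nodes) {
        return edges - nodes + 2;
    }

    public static void main(String[] args) {
        // Hypothetical model: 7 blocks wired together by 9 connections.
        System.out.println("complexity = " + cyclomatic(9, 7)); // prints "complexity = 4"
    }
}
```

A straight-line chain of blocks scores 1, and every extra connection that closes a loop or adds a branch raises the score by one, which is why the metric tracks the "many inter-connected components" problem mentioned above.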
Through earlier detection of emerging model complexity, our research aims to ensure that the problems associated with complexity, such as rework costs and issues found late in development, do not propagate through the development process. Over the long term, this would reduce re-engineering efforts and the costs associated with maintenance and testing activities. We welcome your feedback on our research in the comments section below. Additional Resources To read the paper, Simulink-Specific Design Quality Metrics by Marta Olszewska, please visit http://tucs.fi/publications/view/?pub_id=tOl11a. To read the paper, Complexity Analysis of Simulink Models to improve the Quality of Outsourcing in an Automotive Company by J. Prabhu, please visit http://www.engpaper.com/complexity-analysis-of%C2%A0simulink-models-to-improve-the-quality-of-outsourcing-in-an-automotive-company.htm. 
SEI   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:40pm</span>
  We get it. You’re tired of talking about Millennials. But look around your workplace. How is this generation affecting your workplace culture? How have they affected the way you work? What changes has your organization made in the way it hires and operates in order to attract and accommodate millennial talent?  As they slowly, but surely, become a workplace majority, organizations are grappling with the question of how to integrate the different ideas, skills, mannerisms and beliefs that this generation brings to the...
SHRM   .   Blog   .   <span class='date ' tip=''><i class='icon-time'></i>&nbsp;Jul 27, 2015 01:39pm</span>
Mayank Prakash - Director General, Digital Technology - DWP Hello. My name is Mayank Prakash and I’m the Director General, Digital Technology at the Department for Work and Pensions. I started at DWP back in November, and so far it’s been a hugely exciting introduction to the world of government technology. I’ve spent a lot of time meeting people and learning, and getting to grips with how it all fits together. DWP is the country’s biggest public service department. It handles pensions and benefits - its work affects the lives of millions of people every day. We’re looking for a Chief Technology Architect. This is the first in a series of new appointments I want to make, the first step towards building a new team of talented people to complement the existing expertise. We believe that technology is integral to business delivery; we’re here to deliver public services to millions of citizens, and we can use technology to make those services better. We want to replace our critical systems with newer digital services we’ve already started to build, using modern technologies such as Node.js, MongoDB and Hadoop. This is big, serious, heavy-duty technical change, an exciting challenge for the right architect. We will be implementing GDS Design Principles, and I’m looking forward to making big changes over the next five years. We’ll be building new platforms and APIs based on open standards and open data. If that sounds like something you want to be part of, find out more about the Chief Technology Architect role in DWP Digital Technology.
DWP Digital . Blog . Jul 27, 2015 01:39pm
C. Aaron Cois, Software Engineering Team Lead, CERT Cyber Security Solutions Directorate

This post is the latest in a weekly series to help organizations implement DevOps. On the surface, DevOps sounds great. Automation, collaboration, efficiency—all things you want for your team and organization. But where do you begin? DevOps promises a high return on investment in exchange for a significant shift in culture, process, and technology. Substantially changing any one of those things in an established organization can feel like a superhuman feat. So, how can you start your organization on the path to DevOps without compromising your existing business goals and trajectories?

This is no easy question, and the answer is different for every organization. The first step is not to focus on automation or new technologies. Instead, look at your current team culture and processes, and identify the biggest sources of risk and inefficiency. A DevOps strategy for your organization should be designed and implemented to address these issues. Implementing a solid DevOps strategy often requires the introduction of new technologies, as with organizations that don't have a standard issue-tracking system in place across all teams or that have inconsistent version-control practices. However, the ultimate goal should be improving communication and process. In many cases the most important solutions are process-based, and may even be informal adjustments to team behaviors. Have Ops (operations) staff been invited to your development project kickoff meetings? If not, isn't that a great opportunity to engender broader support for your project and get Ops feedback on potential wins or risks for the organization from their perspective? The overarching principle of DevOps is to align the goals of every worker with your ultimate business goal: a fully functional system, running smoothly in production and delighting customers.
Achieving this alignment must be the primary objective shared by all staff, including both Dev (developers) and Ops. Open communication is the first step in aligning goals across your teams. Also, ask questions about goals and incentives: Are your developers held accountable for the success or failure of their code in production? Or are they actually working toward the internal team goal of handing the packaged code to the Ops team and letting them handle deployment? This bifurcation is a recipe for risk—everyone on the project must be incentivized to make decisions that will ensure both a successful deployment and a reliable system in the field. Often, the biggest cultural win comes from focusing all stakeholders on the ultimate business goal, instead of their isolated team goals, and giving them the space to optimize their processes to achieve this ultimate objective.

Every Thursday, the SEI will publish a new blog post that offers guidelines and practical advice to organizations seeking to adopt DevOps. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.

Additional Resources

To read all the installments in our weekly DevOps series, please click here.

To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.
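The accountability question above — are developers answerable for their code in production? — can be made concrete with a shared, automated check. The sketch below is illustrative only and is not from the SEI post; the health-endpoint path, response format, and status field are assumptions. The idea is that one smoke test, run by the same pipeline that performs the deployment, gives Dev and Ops a single shared definition of "successfully deployed."

```python
"""Illustrative only: a shared post-deployment smoke test.

Endpoint names and the response schema are hypothetical; a real team
would point these checks at its own service and run them from the same
pipeline that deployed the code, so developers and operations staff
gate on one shared definition of success.
"""
import json
import urllib.request


def check_health(base_url, timeout=5):
    """Return (ok, detail) for a service's assumed /health endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            body = json.load(resp)
            # Healthy means HTTP 200 and a JSON body reporting status "up".
            return resp.status == 200 and body.get("status") == "up", body
    except (OSError, ValueError) as exc:
        # Connection failures and bad JSON both count as an unhealthy deploy.
        return False, {"error": str(exc)}


def smoke_test(base_url):
    """A deployment is 'done' only when the shared checks pass."""
    ok, detail = check_health(base_url)
    print("healthy" if ok else f"unhealthy: {detail}")
    return ok
```

In practice a pipeline would fail the deployment stage (and page both Dev and Ops) when `smoke_test` returns `False`, rather than treating the handoff to Ops as the finish line.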
SEI . Blog . Jul 27, 2015 01:39pm
Ben Holliday

At the start of a project, in ‘Discovery’, your team’s job is to learn as much as possible about what your users need to do. Your goal is to understand the user needs for the service before you start to design or prioritise solutions. Jon Kolko describes this process beautifully in his book ‘Well Designed: How to Use Empathy to Create Products That People Love’: "Your job is to help your team ship the right product to your users. Your job is to figure out who your users are, what they want to be able to do, and what the right products are to help them do that. You’ll need to spend time with the people who are going to use your product and watch them do whatever it is they do. Your goal is to both understand them and empathise with them."

Understanding user needs is more about having a willingness to learn than knowing what you’re doing. The important thing in ‘Discovery’ is to go and see things for yourself. Depending on your job role, this might feel uncomfortable or put you in unfamiliar situations. This is all part of the learning opportunity to see things differently. Experiences like this help us to break away from existing assumptions about who our users are and what they need. Try to involve your entire team in this process. If you do have a dedicated user researcher in your team, they’ll be able to help you. If not, talk to your User Research team for advice.

Start with what happens now

You want to know what people need to do, so a good place to start is to find out what they do now. When you’re working on a digital service, it’s rare that this will be solving a completely new problem. Start by looking at existing channels. For some services this might include telephony (phone services), a paper process, or face-to-face interactions with front-line staff. Think about who your service is for. As you start to learn more about what these people need, you’ll begin to understand the details of what they do - where, when, and how frequently.
An example - Cold Weather Payments

An example project we use in the Digital Academy is Cold Weather Payments. Under this scheme, if you’re getting certain benefits, you may get a Cold Weather Payment of £25 for each 7-day period of very cold weather between 1 November and 31 March. We know that, broadly speaking, the people that benefit from this service are pensioners and people that get income-related benefits. A good place to start would be to talk to the people in these groups. You could even start with your own friends and family if they get Cold Weather Payments. We also know that we tell people on GOV.UK to contact their local Jobcentre Plus or pension centre if they have questions about Cold Weather Payments, for example, if they think they have a missing payment. We could start by talking to staff about the types of customers that contact them, or arrange to listen to calls to understand the support and advice being provided. We could even call existing helplines to ask questions ‘as a customer’ so we can experience what it’s really like to use the service.

Any of these approaches should start to show us what happens now. For example, the policy intent for Cold Weather Payments is that people will be able to afford to put their heating on, but what if we found that most people spent the extra money on something else? Maybe people don’t know they’re getting Cold Weather Payments, so they don’t turn the heating on when the weather is cold because they don’t realise they can afford to. With this type of insight we can start to understand that people have a need to know they’re getting Cold Weather Payments, so they know they can afford to put the heating on and use this money for its intended purpose.

Make a research plan

It’s important to have a plan, but it’s equally important to keep planning to a minimum - don’t use it as an excuse not to get started, because time is often limited at the start of projects.
Assume that, at least to start with, you’re probably going to ask the wrong questions and have incorrect assumptions about users - this is the whole point of ‘Discovery’. Start by writing a short research plan. Keep this simple, but include:

a summary of what you want to learn (use bullet points for any key questions you need to answer)

a set of open-ended questions you can use when talking to people (don’t include more than 10 questions - these should help you focus on what you want to learn, acting more as prompts than as ‘survey’ questions)

When you talk to people, let the conversation flow rather than sticking rigidly to your plan. The most difficult skill to master is giving people the space to talk without interrupting them (this is harder than you think). As you start to learn, you’ll want to make adjustments to your initial set of questions, so don’t be afraid to do this as you go along. Most importantly, make sure that you don’t ask people questions about their preferences - we’re interested in what they do, rather than their opinions.

Keep a record of what you learn

Find ways to record everything you’re learning. This will help you share what you find with the rest of your team. If you’re working on your own, try to take some notes. If you can work in a pair, let someone else take notes while you ask questions. Most importantly, take time out to write down key observations or quotes. It’s a good idea to use post-it notes so you can stick up observations in your project space to discuss with your team.

Don’t treat ‘Discovery’ as an exact science

At the ‘Discovery’ stage of a project we’re interested in empathy and understanding. It’s not about how much research you do; it’s about how well you understand the needs of your users. This way you’ll be better equipped to make difficult product decisions as you prioritise work to build your service.
DWP Digital . Blog . Jul 27, 2015 01:39pm
By Douglas C. Schmidt, Principal Researcher

In 2014, the SEI blog experienced unprecedented growth, with visitors in record numbers learning more about our work in big data, secure coding for Android, malware analysis, Heartbleed, and V models for testing. In 2014 (through December 21), the SEI blog logged 129,000 visits, nearly double the entire 2013 yearly total of 66,757 visits. As we look back on the last 12 months, this blog post highlights our 10 most popular posts (based on the number of visits). As we did with our mid-year review, we will include links to additional related resources that readers might find of interest. When possible, we grouped posts by research area to make it easier for readers to learn about related areas of work. This blog post first presents the top 10 posts and then provides a deeper dive into each area of research.

1. Using V Models for Testing
2. Two Secure Coding Tools for Analyzing Android Apps (secure coding)
3. Common Testing Problems: Pitfalls to Prevent and Navigate
4. Four Principles of Engineering Scalable, Big Data Systems (big data)
5. A New Approach to Prioritizing Malware Analysis
6. Secure Coding for the Android Platform (secure coding)
7. A Generalized Model for Automated DevOps (DevOps)
8. Writing Effective Yara Signatures to Identify Malware
9. An Introduction to DevOps (DevOps)
10. The Importance of Software Architecture in Big Data Systems (big data)

1. Using V Models for Testing

Don Firesmith’s post, Using V Models for Testing, which was published in November 2013, remains the most popular post. In the post, Firesmith introduces three variants on the traditional V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method. The V model builds on the traditional waterfall model of system or software development by emphasizing verification and validation.
The V model takes the bottom half of the waterfall model and bends it upward into the form of a V, so that the activities on the right verify or validate the work products of the activities on the left. More specifically, the left side of the V represents the analysis activities that decompose users’ needs into small, manageable pieces, while the right side of the V shows the corresponding synthesis activities that aggregate (and test) these pieces into a system that meets users’ needs. The single V model modifies the nodes of the traditional V model to represent the executable work products to be tested rather than the activities used to produce them. The double V model adds a second V to show the type of tests corresponding to each of these executable work products. The triple V model adds a third V to illustrate the importance of verifying the tests to determine whether they contain defects that could stop or delay testing or lead to false positive or false negative test results. In the triple V model, it is not required or even advisable to wait until the right side of the V to perform testing. Unlike the traditional model, where tests may be developed but not executed until the code exists (i.e., the right side of the V), with executable requirements and architecture models, tests can now be executed on the left side of the V.

Readers interested in finding out more about Firesmith’s work in this field can view the following resources:

Book: Common System and Software Testing Pitfalls
Podcast: Three Variations on the V Model for System and Software Testing

2. Two Secure Coding Tools for Analyzing Android Apps and 6. Secure Coding for the Android Platform (secure coding)

One of the most popular areas of research among SEI blog readers so far this year has been the series of posts highlighting our work on secure coding for the Android platform.
Android is an important area to focus on, given its mobile device market dominance (82 percent of worldwide market share in the third quarter of 2013), the adoption of Android by the Department of Defense, and the emergence of popular massive open online courses on Android programming and security. Since its publication in late April, the post Two Secure Coding Tools for Analyzing Android Apps, by Will Klieber and Lori Flynn, has been the second most popular post on our site. The post highlights a tool they developed, DidFail, that addresses a problem often seen in information flow analysis: the leakage of sensitive information from a sensitive source to a restricted sink (taint flow). Previous static analyzers for Android taint flow did not combine precise analysis within components with analysis of communication between Android components (intent flows). CERT’s new tool analyzes taint flow for sets of Android apps, not only single apps.  DidFail is available to the public as a free download. Also available is a small test suite of apps that demonstrates the functionality that DidFail provides. The second tool, which was developed for a limited audience and is not yet publicly available, addresses activity hijacking attacks, which occur when a malicious app receives a message (an intent) that was intended for another app, but not explicitly designated for it. The post by Klieber and Flynn is the latest in a series detailing the CERT Secure Coding team’s work on techniques and tools for analyzing code for mobile computing platforms.  In April, Flynn also wrote a post, Secure Coding for the Android Platform, the sixth most popular post in 2014. In that post, Flynn highlights secure coding rules and guidelines specific to the use of Java in the Android platform. 
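The kind of inter-app taint flow DidFail detects can be sketched abstractly. This is a conceptual illustration only, not DidFail’s algorithm or output: the component names, sources, and sinks below are invented, and the real analysis works on compiled app sets, tracking Android intents with far more precision. The sketch simply shows the core idea of propagating taint along inter-component communication edges to a fixed point and reporting which restricted sinks become reachable from sensitive sources.

```python
# Illustrative only: fixed-point taint propagation over a toy graph of
# inter-component "intent" edges. Component names are invented; this is
# not how DidFail models Android apps internally.
FLOWS = {
    # component -> components it sends data to
    "ContactsApp.Reader":    ["ContactsApp.Sender"],
    "ContactsApp.Sender":    ["MessagingApp.Receiver"],
    "MessagingApp.Receiver": ["MessagingApp.SmsSink"],
    "WeatherApp.Ui":         ["MessagingApp.Receiver"],
}
SOURCES = {"ContactsApp.Reader"}   # reads sensitive data (a taint source)
SINKS = {"MessagingApp.SmsSink"}   # sends data off-device (a restricted sink)


def tainted_components(flows, sources):
    """Propagate taint along edges until no new component becomes tainted."""
    tainted = set(sources)
    changed = True
    while changed:
        changed = False
        for src, dests in flows.items():
            if src in tainted:
                for dest in dests:
                    if dest not in tainted:
                        tainted.add(dest)
                        changed = True
    return tainted


def leaks(flows, sources, sinks):
    """Restricted sinks reachable from sensitive sources: potential leaks."""
    return tainted_components(flows, sources) & sinks
```

On this toy graph, contact data reaching the SMS sink via two apps would be flagged, which mirrors DidFail’s distinguishing feature: analyzing taint flow across sets of apps rather than within a single app.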
Although the CERT Secure Coding Team has developed secure coding rules and guidelines for Java, prior to 2013 the team had not developed a set of secure coding rules that were specific to Java’s application in the Android platform. Flynn’s post discusses our initial set of Android rules and guidelines, which include mapping our existing Java secure coding rules and guidelines to Android and creating new Android-specific rules for Java secure coding. Readers interested in finding out more about the CERT Secure Coding Team’s work in secure coding for the Android platform can view the following additional resources:

Paper: Android Taint Flow Analysis for App Sets (SOAP 2014 workshop)
Presentation: Android Taint Flow Analysis for App Sets
Thesis: Precise Static Analysis of Taint Flow for Android Application Sets
CERT Secure Coding Rules and Guidelines: CERT Secure Coding Rules and Guidelines for Android wiki

3. Common Testing Problems: Pitfalls to Prevent and Navigate

A widely cited study for the National Institute of Standards & Technology (NIST) reports that inadequate testing methods and tools annually cost the U.S. economy between $22.2 billion and $59.5 billion, with roughly half of these costs borne by software developers in the form of extra testing and half by software users in the form of failure avoidance and mitigation efforts. The same study notes that between 25 percent and 90 percent of software development budgets are often spent on testing. In his series on testing, Don Firesmith highlights results of an analysis that documents problems that commonly occur during testing. Specifically, this series of posts identifies and describes 77 testing problems organized into 14 categories; lists potential symptoms by which each can be recognized, potential negative consequences, and potential causes; and makes recommendations for preventing them or mitigating their effects.
Here’s an excerpt from the first post, Common Testing Problems: Pitfalls to Prevent and Navigate, which focused on general testing problems that are not specific to any one type of testing but apply to all types:

Clearly, there are major problems with the efficiency and effectiveness of testing as it is currently performed in practice. In the course of three decades of developing systems and software—as well as my involvement in numerous independent technical assessments of development projects—I have identified and analyzed testing-related problems that other engineers, managers, and I have observed to commonly occur during testing. I also solicited feedback from various LinkedIn groups (such as Bug Free: Discussions in Software Testing, Software Testing and Quality Assurance) and the International Council on Systems Engineering (INCOSE). As of March 2013, I have received and incorporated feedback from 29 reviewers in 10 countries. While the resulting framework of problems can apply to both software and systems testing, it emphasizes software because that is where most of the testing problems occur.

Readers interested in finding out more about Firesmith’s research in this area can view the following additional resources:

Presentation: Common Testing Problems: Pitfalls to Prevent and Mitigate, and the associated Checklist Including Symptoms and Recommendations, which were presented at the FAA Verification and Validation Summit 8 (2012) in Atlantic City, New Jersey, on 10 October 2012.

4. Four Principles of Engineering Scalable, Big Data Systems and 10. The Importance of Software Architecture in Big Data Systems (big data)

New data sources, ranging from diverse business transactions to social media, high-resolution sensors, and the Internet of Things, are creating a digital tsunami of big data that must be captured, processed, integrated, analyzed, and archived.
Big data systems that store and analyze petabytes of data are becoming increasingly common in many application domains. These systems represent major, long-term investments, requiring considerable financial commitments and massive-scale software and system deployments. With analysts estimating data storage growth at 30 percent to 60 percent per year, organizations must develop a long-term strategy to address the challenge of managing projects that analyze exponentially growing data sets with predictable, linear costs.

In his popular, ongoing big data series on the SEI blog, researcher Ian Gorton continues to describe the software engineering challenges of big data systems. His most popular post in this series, Four Principles of Engineering Scalable, Big Data Systems, offers four principles that hold for any scalable, big data system. These principles can help architects continually validate major design decisions across development iterations, and hence provide a guide through the complex collection of design trade-offs that all big data systems require. Here’s an excerpt:

In earlier posts on big data, I have written about how long-held design approaches for software systems simply don’t work as we build larger, scalable big data systems. Examples of design factors that must be addressed for success at scale include the need to handle the ever-present failures that occur at scale, assure the necessary levels of availability and responsiveness, and devise optimizations that drive down costs. Of course, the required application functionality and engineering constraints, such as schedule and budgets, directly impact the manner in which these factors manifest themselves in any specific big data system.
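The first design factor in Gorton’s excerpt, handling the ever-present failures that occur at scale, is commonly addressed with retry policies around every remote call. The sketch below is a generic illustration, not taken from Gorton’s posts; the function names and limits are invented.

```python
"""Illustrative only: retry a flaky remote operation with exponential
backoff and jitter. Real big-data systems layer policies like this
(plus timeouts, circuit breakers, and monitoring) around remote calls,
because at scale some fraction of them always fails."""
import random
import time


def call_with_retries(op, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Call op(), retrying transient ConnectionErrors up to max_attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure to the caller
            # Exponential backoff with jitter avoids synchronized retry
            # storms when thousands of clients fail at the same moment.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            sleep(delay)
```

The `sleep` parameter is injected only so the policy can be exercised in tests without real delays; production callers would use the default.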
In The Importance of Software Architecture in Big Data Systems, the 10th most popular post in 2014, Gorton continues to address the software engineering challenges of big data by exploring how the nature of building highly scalable, long-lived big data applications influences iterative and incremental design approaches. Readers interested in finding out more about Gorton’s research in big data can also view the following additional resources:

Webinar: Software Architecture for Big Data Systems
Podcast: An Approach to Managing the Software Engineering Challenges of Big Data
Podcast: Four Principles for Engineering Scalable, Big Data Systems
Blog Post: In the blog post Addressing the Software Engineering Challenges of Big Data, Gorton describes a risk-reduction approach called Lightweight Evaluation and Architecture Prototyping (for Big Data) that he developed with fellow researchers at the SEI. The approach is based on principles drawn from proven architecture and technology analysis and evaluation techniques to help the Department of Defense (DoD) and other enterprises develop and evolve systems to manage big data.
Blog Post: In the blog post Principles of Big Data Systems: You Can’t Manage What You Don’t Monitor, Gorton takes a deeper dive into one of the four challenges that he enumerated in his post, namely, you can’t manage what you don’t monitor.

5. A New Approach to Prioritizing Malware Analysis

Every day, analysts at major anti-virus companies and research organizations are inundated with new malware samples. From Flame to lesser-known strains, figures indicate that the number of malware samples released each day continues to rise. In 2011, malware authors unleashed approximately 70,000 new strains per day, according to figures reported by Eugene Kaspersky. The following year, McAfee reported that 100,000 new strains of malware were unleashed each day.
An article published in the October 2013 issue of IEEE Spectrum updated that figure to approximately 150,000 new malware strains per day. Not enough manpower exists to manually address the sheer volume of new malware samples that arrive daily in analysts’ queues.

CERT researcher Jose Morales sought to develop an approach that would allow analysts to identify and focus first on the most destructive binary files. In his April 2014 blog post, A New Approach to Prioritizing Malware Analysis, Morales describes the results of research he conducted with fellow researchers at the SEI and CMU’s Robotics Institute, highlighting an analysis that demonstrates the validity (with 98 percent accuracy) of an approach that helps analysts distinguish between the malicious and benign nature of a binary file. This blog post is a follow-up to his 2013 post Prioritizing Malware Analysis, which describes the approach based on the file’s execution behavior.

Readers interested in learning more about prioritizing malware analysis should listen to the following resource:

Podcast: Characterizing and Prioritizing Malicious Code

7. A Generalized Model for Automated DevOps and 9. An Introduction to DevOps (DevOps)

In June, C. Aaron Cois wrote the blog post A Generalized Model for Automated DevOps, where he presents a generalized model for automated DevOps and describes its significant potential advantages for a modern software development team. With the post An Introduction to DevOps, Cois kicked off a series exploring various facets of DevOps from an internal perspective and his own experiences as a software engineering team lead. Here’s an excerpt from his initial post:

At Flickr, the video- and photo-sharing website, the live software platform is updated at least 10 times a day. Flickr accomplishes this through an automated testing cycle that includes comprehensive unit testing and integration testing at all levels of the software stack in a realistic staging environment.
If the code passes, it is then tagged, released, built, and pushed into production. This type of lean organization, where software is delivered on a continuous basis, is exactly what the agile founders envisioned when crafting their manifesto: a nimble, streamlined process for developing and deploying software into the hands of users while continuously integrating feedback and new requirements. A key to Flickr’s prolific deployment is DevOps, a software development concept that literally and figuratively blends development and operations staff and tools in response to the increasing need for interoperability.

The following resources are available to readers interested in learning more about DevOps:

A New Blog Post Series: In November, Cois and other researchers in his group launched a new blog post series that offers guidelines and practical advice to organizations seeking to adopt DevOps. A new post is published every Thursday.
Podcast: DevOps—Transform Development and Operations for Fast, Secure Deployments

8. Writing Effective Yara Signatures to Identify Malware

In previous blog posts, David French has written about applying similarity measures to malicious code to identify related files and reduce analysis expense. Another way to observe similarity in malicious code is to leverage analyst insights by identifying files that possess some property in common with a particular file of interest. One way to do this is by using YARA, an open-source project that helps researchers identify and classify malware. YARA has gained enormous popularity in recent years as a way for malware researchers and network defenders to communicate their knowledge about malicious files, from identifiers for specific families to signatures capturing common tools, techniques, and procedures (TTPs).
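A YARA rule expresses such knowledge as a set of named strings plus a boolean condition over them. The rule below is purely illustrative: the family name, strings, and byte pattern are invented for this example, not drawn from real malware or from French’s post.

```yara
rule Hypothetical_Family_Alpha
{
    meta:
        description = "Illustrative signature for a hypothetical malware family"
    strings:
        $mutex = "Global\\alpha_worker_mutex" ascii   // unique mutex name
        $c2    = "report.example-c2.invalid" ascii    // hard-coded C2 hostname
        $code  = { 6A 40 68 00 30 00 00 6A 14 }       // distinctive code bytes
    condition:
        uint16(0) == 0x5A4D and 2 of them             // PE file, any 2 markers
}
```

Here `uint16(0) == 0x5A4D` restricts matches to files starting with the "MZ" executable header, and requiring any two of the three markers tolerates variants that drop or change one of them — exactly the kind of objective, file-derived criteria the post recommends.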
In this latest post, Writing Effective Yara Signatures to Identify Malware, which continues to draw a robust audience since its publication in 2012, French provides guidelines for using YARA effectively, focusing on the selection of objective criteria derived from malware, the types of criteria most useful in identifying related malware (including strings, resources, and functions), and guidelines for creating YARA signatures using these criteria. Here’s an excerpt:

Reverse engineering is arguably the most expensive form of analysis to apply to malicious files. It is also the process by which the greatest insights can be made against a particular malicious file. Since analysis time is so expensive, however, we constantly seek ways to reduce this cost or to leverage the benefits beyond the initially analyzed file. When classifying and identifying malware, therefore, it is useful to group related files together to cut down on analysis time and leverage analysis of one file against many files. To express such relationships between files, we use the concept of a "malware family", which is loosely defined as "a set of files related by objective criteria derived from the files themselves." Using this definition, we can apply different criteria to different sets of files to form a family.

The following resource is available to readers interested in learning more about this work:

Research Report: Function Hashing for Malicious Code Analysis, CERT Research Report, pp. 26-29

Wrapping Up 2014 and Looking Ahead

This has been a great year for the SEI Blog. We plan to take a break in publication for the remainder of 2014, but we will kick off 2015 with a series of great posts:

We will begin a series of posts highlighting the SEI’s technical strategy for 2015 and beyond.
Will Klieber and Lori Flynn will update their continuing research on secure coding for the Android platform.
Sagar Chaki and James Edmondson will detail their research on software model checking for verifying distributed algorithms.
Aaron Cois will continue his series on DevOps with posts on secure and continuous integration.

As always, we welcome your ideas for future posts and your feedback on those already published. Please leave feedback in the comments section below.

Additional Resources

Download the latest publications from SEI researchers at our digital library: http://resources.sei.cmu.edu/library/.
SEI . Blog . Jul 27, 2015 01:39pm
Employees spend more than one-third of their day at work - for some it can be as much as two-thirds. That's at least one meal a day and the vast majority of their waking hours. Given all of the time we spend at work, employers and employees coming together to create a healthy work environment benefits everyone. There is also a business case for workplace wellness. Nearly 1 in 5 Americans have a diagnosable mental health condition each year, and many others are at risk. In fact, nearly half of all Americans will...
SHRM . Blog . Jul 27, 2015 01:39pm
Nic Harrison - Director of Enabling Digital Delivery

I’m Nic Harrison, the Director of Enabling Digital Delivery, part of DWP’s Business Transformation Group. I’ve been with the department for a year, working with Kevin Cunnington to put in place the organisation and plans needed to deliver a once-in-a-lifetime transformation for our citizens, third-party partners and staff. I used to live in Dallas, Texas, where there is a good ol’ boy saying: "fixin’ to get ready". This means doing all the necessary preparation before a big task can start. I really feel 2014 was about "fixin’ to get ready" for the huge year we have in front of us in 2015.

Last year we put in place Digital Academies to train our staff, we recruited a number of specialists to provide leadership and guidance in new technology and techniques (take a bow, Ben Holliday, my colleague and Head of User Experience) and we worked with the change programmes to ensure consistency and promote re-use across the department. In short, we put in place the foundations on which to build the new DWP.

The enabling part of Enabling Digital Delivery is about making sure we have robust designs that deliver great, user-centred services. This is often more about the parts of a service that cannot be delivered with technology than the parts that can be. Really good design puts human intervention where it is needed and valuable, and it automates whenever we can do that safely and securely. That’s why we look at user journeys and test and learn around real user experiences. However, when we can automate, we need first-class, 21st-century technology to support us. That’s why I am delighted we have our new Director General for Technology, Mayank Prakash, in the department. We are already drawing on expertise from the Chief Technology Office domain architecture function in order to understand the "art of the possible" from our current and future IT estate.
I am sure that Mayank’s vision for the department’s technology will bring new and exciting opportunities to leverage technology and deliver innovative services. After "fixin’ to get ready" last year, we are now really getting going with the momentous task of transforming DWP together.
DWP Digital . Blog . Jul 27, 2015 01:38pm
By John Haller, Senior Member of the Technical Staff, CERT Division

Attacks and disruptions to complex supply chains for information and communications technology (ICT) and services are increasingly gaining attention. Recent incidents, such as the Target breach, the HAVEX series of attacks on the energy infrastructure, and the recently disclosed series of intrusions affecting DoD TRANSCOM contractors, highlight supply chain risk management as a cross-cutting cybersecurity problem. This risk management problem goes by different names, for example, Supply Chain Risk Management (SCRM) or Risk Management for Third Party Relationships. The common challenge, however, is having confidence in the security practices and processes of entities on which an organization relies, when the relationship with those entities may be, at best, an arm’s-length agreement. This blog post highlights supply chain risks faced by the Department of Defense (DoD), federal civilian agencies, and industry; argues that these problems are more alike than different across these sectors; and introduces practices to help organizations better manage these risks.

Protecting Key Assets

In the past, when government or business invested in a piece of machinery, an appliance, or a service, it could more or less expect the item to function as advertised. Checks and balances (such as licenses, warranties, regulations, legal recourse, and supplier reputation) reasonably ensured against defects or service failures. Unfortunately, such controls seem increasingly inadequate when applied to global supply chains for the complex information and communications technology—and technology-based services—that underpin critical capabilities in most organizations, especially in mission- and safety-critical operations in the US government and DoD.
Concerns about supply chain risk management in ICT include the possibility that counterfeit or maliciously tainted hardware and software might be used by an acquiring organization to its detriment. Organizations also often face uncertain risks because of their dependence on external entities for the ongoing use and sustainment of ICT—the so-called service supply chain (see NIST Special Publication 800-161, Supply Chain Risk Management Practices for Federal Systems and Organizations, Section 1.4, page 3, for a discussion and definition of ICT services as part of SCRM). Supply chain risk concerns can often seem "special" or specific to a particular industry or sector. For example, healthcare institutions must ensure that business associates with whom they share private health information will protect that information. Similarly, the defense sector has concerns about verifying the trustworthiness of subcontractors with which it may share sensitive weapons system information. In each case, however, the essential problem is the same. The organization has key assets—financial account information, private health information, or defense systems information—that must be protected for the organization to be successful. When the organization relies on a supply chain, it is forced to depend on processes, capabilities, and actions outside its direct control for that protection. External dependencies can range from contracts with cloud-service providers for data storage to reliance on public infrastructure. As part of the SEI CERT Division's work on critical infrastructure cybersecurity, researchers seek to help organizations by providing common ways to assess and improve external dependency management across the entire lifecycle of external entity relationships.
The lifecycle includes selecting suppliers and vendors and conducting initial risk assessments, managing ongoing relationships, and planning and conducting the incident response activities needed if the organization experiences a disruption involving the external entity. Different critical infrastructure and government sectors refer to the risk of depending on external entities to support key services by different names. The DoD frequently uses the term supply chain risk management to refer to concerns about the integrity of hardware and software, while the financial community is facing increasing scrutiny over "third-party risk." CERT researchers advocate "external dependency management" as a broad term for the management activities that control the risks of these relationships.

The Realities of Managing Suppliers and Dependency Risk

Managing dependence on external entities is challenging because it is hard to verify the trustworthiness of suppliers' security practices and processes across arm's-length relationships. Typically, the most basic step taken to control risk is the codification of security requirements into contracts and other formal agreements. However, contracts can be of limited use because of uncertainty around contractual duties, the difficulty of proving breach involving complex ICT systems, or the rate of technological change. Organizations may also simply lack the leverage to negotiate security requirements, or it may be unrealistic to expect a particular vendor to meet a very high level of cybersecurity. Other approaches often used to mitigate this problem are simply ineffective in certain threat environments. For example, asking a vendor to complete a checklist is often unsatisfactory because either the activity does not capture the context of the particular relationship or it captures the state of affairs at only one point in time.
By contrast, building a strong relationship from the earliest stages of the supplier lifecycle can help establish the communication essential to managing dependency risk. Trust can be built over time to improve communications, recognizing that suppliers' business and resource constraints drive their actions. A sometimes overlooked—but very basic—challenge involves gathering information and establishing trusted communication with suppliers. Financial services companies, for example, are often connected to a wide array of suppliers needed for payments, clearing and settlement, data processing, communications, and so on. The resource demands of managing multiple suppliers across organizational boundaries can be daunting. Organizations should start by having a good process to identify and prioritize the critical few external entities. Having identified and prioritized dependencies, the organization can then develop requirements for those entities, which the next section of this post explores.

Building External Dependency Management Practices

Establishing requirements for external entities (see also the Office of the Comptroller of the Currency's guidance on the risk management lifecycle) is a foundational, essential aspect of managing dependency risk. Requirements are largely driven by the need to protect and sustain the assets used to support high-value services. Requirements may also support regulations or corporate policy. For example, HIPAA requires the protection of health information, a requirement that specifically extends to business associates as defined under the law. The criticality of the service and the importance of the supplier to that service drive the requirement, for example, protecting information or ensuring continuous availability. Requirements provide the basis for prioritizing and managing external entity relationships. Of course, well-defined requirements will not change the degree to which the organization can control suppliers or drive their behavior.
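The identify-and-prioritize step described above can be pictured as a simple scoring exercise. This is a minimal illustration, not CERT's actual method; the class, field names, and 1-to-5 scales are assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class ExternalDependency:
    """One supplier or other external entity the organization relies on."""
    name: str
    service_criticality: int   # 1 (low) to 5 (mission-critical) -- assumed scale
    supplier_importance: int   # 1 (easily replaced) to 5 (sole source) -- assumed scale

def prioritize(dependencies):
    """Rank dependencies so the 'critical few' surface first.

    Score = criticality of the supported service x importance of the
    supplier to that service, mirroring the drivers named in the post.
    """
    return sorted(
        dependencies,
        key=lambda d: d.service_criticality * d.supplier_importance,
        reverse=True,
    )

ranked = prioritize([
    ExternalDependency("payments clearing network", 5, 5),
    ExternalDependency("office catering vendor", 1, 1),
    ExternalDependency("cloud data-storage provider", 4, 3),
])
```

With these illustrative scores, the payments network ranks first and the catering vendor last, which is the "critical few" ordering a requirements effort would start from.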
When organizations have little meaningful control over third parties, they must employ practices and strategies that can be controlled internally. Management practices can be characterized into three categories:

- internal practices that can be executed and directly controlled by an organization, such as using multiple vendors, limiting the type of information external suppliers are allowed to process, or accounting for the external dependency as part of response planning
- external practices that the organization requires the supplier to perform, such as encrypting data, monitoring network access, or testing business continuity plans
- cooperative activities that organizations can perform in collaboration with suppliers, such as conducting joint assessments of controls or sharing cyber-threat data

Wrapping Up and Looking Ahead

In the face of increasingly sophisticated and frequent cybersecurity attacks, organizations can use a mix of internal, external, and cooperative controls to help manage risks and meet requirements. Often, as trust evolves with suppliers over time, it is possible to refine the mix of supplier management strategies and build collaborative approaches to managing risks. In the case of public and shared suppliers, the use of cooperative risk management strategies can be one of the most effective means of managing risk. The DoD-Defense Industrial Base Collaborative Information Sharing Environment (DCISE) is one example of how this is currently being done, as is the Department of Homeland Security's National Cybersecurity and Communications Integration Center. It has become increasingly evident that increased collaboration, including the sharing of information, is needed to help organizations protect critical resources. For many organizations, relationships with partners and other outside entities, rather than internal technical monitoring, are the predominant way they learn about incidents (see page 53 of the 2013 Verizon Data Breach Investigations Report).
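The three categories of management practices can be modeled as a simple classification. The practice descriptions are taken from the post; the enum, the mapping, and the helper function are illustrative assumptions:

```python
from enum import Enum

class ControlType(Enum):
    INTERNAL = "executed and directly controlled by the organization"
    EXTERNAL = "required of the supplier"
    COOPERATIVE = "performed jointly with the supplier"

# Practices named in the post, mapped to their category.
practices = {
    "use multiple vendors": ControlType.INTERNAL,
    "limit information suppliers may process": ControlType.INTERNAL,
    "include the dependency in response plans": ControlType.INTERNAL,
    "encrypt data": ControlType.EXTERNAL,
    "monitor network access": ControlType.EXTERNAL,
    "test business continuity plans": ControlType.EXTERNAL,
    "conduct joint assessments of controls": ControlType.COOPERATIVE,
    "share cyber-threat data": ControlType.COOPERATIVE,
}

def controls_of(kind):
    """List the practices belonging to one category."""
    return [p for p, t in practices.items() if t is kind]
```

The value of the taxonomy is in mixing the categories: when external controls cannot be enforced, an organization compensates with internal ones, and cooperative controls grow as trust with the supplier matures.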
External dependency management is more similar than different across the public and private organizations that underpin Americans' security and economy. Organizations need opportunities to improve their capability and learn from one another. For these reasons, CERT is sponsoring a Supply Chain Risk Management Symposium on January 15, 2015, in Arlington, Va.

Additional Resources

For more information or to register for the CERT Supply Chain Risk Management Symposium, please visit http://www.cert.org/scrm. To view the webinar Lessons in External Dependency and Supply Chain Risk Management, featuring CERT researchers John Haller and Matthew Butkovic, please visit https://www.webcaster4.com/Webcast/Page/139/5934.
SEI . Blog . Jul 27, 2015 01:38pm
  We’ve all done it.  There’s no need to be ashamed.  Heck!  I even have a ritual around it.  I shut my office door.  I close the blinds.  I play soothing music and just about when Elton John belts out, "Blue jean baby!"…wait, that’s not what I meant (get your minds out of the gutter).  I use this relaxing setting to check Glassdoor and see what is being said about my organization.  Like you I dread the scathing review of HR and how we literally handle our business.  Don’t get me wrong.  I am not too concerned about the reviews...
SHRM . Blog . Jul 27, 2015 01:38pm
By Suzanne Miller, Principal Researcher, Software Solutions Division

This blog post is the sixth in a series on Agile adoption in regulated settings, such as the Department of Defense, Internal Revenue Service, and Food and Drug Administration. "Across the government, we've decreased the time it takes across our high-impact investments to deliver functionality by 20 days over the past year alone. That is a big indicator that agencies across the board are adopting agile or agile-like practices," Lisa Schlosser, acting federal chief information officer, said in a November 2014 interview with Federal News Radio. Schlosser based her remarks on data collected by the Office of Management and Budget (OMB) over the last year. In 2010, the OMB issued guidance calling on federal agencies to employ "shorter delivery time frames, an approach consistent with Agile" when developing or acquiring IT. As evidenced by the OMB data, Agile practices can help federal agencies and other organizations design and acquire software more effectively, but they need to understand the risks involved when contemplating the use of Agile. This ongoing series on Readiness & Fit Analysis (RFA) focuses on helping federal agencies and other organizations in regulated settings understand the risks involved when contemplating or embarking on a new approach to developing or acquiring software. Specifically, this blog post, the sixth in the series, explores issues related to the system attributes organizations should consider when adopting Agile.

A Framework for Determining Agile Readiness

Many organizations, especially in the federal government, have traditionally used a waterfall lifecycle model (as epitomized by the engineering "V" charts) for software development. Programming teams in these organizations are accustomed to being managed via a series of document-centric technical reviews, such as design reviews.
These reviews focus on the evolution of artifacts that describe the requirements and design of a system rather than its evolving implementation, as is more common with Agile methods. Because of this significant difference in focus, many organizations struggle to adopt Agile practices and find it hard to prepare for technical reviews that must account for both implementation artifacts and requirements and/or design artifacts at different levels of abstraction. On the other hand, some organizations are surprised to discover they are already performing some of the practices of Agile methods, which can ease Agile adoption. The method for using RFA and the profile that supports CMMI for Development adoption are found in Chapter 12 of CMMI Survival Guide: Just Enough Process Improvement. Adopting new practices like those found in CMMI models involves adoption risk, as does the adoption of many other technologies. I first used RFA in the 1990s to identify adoption risks for software process tools. Since that time, I have used RFA to profile various technologies, including CMMI and, now, Agile. For the past several years, the SEI has researched the adoption of Agile methods in U.S. Department of Defense (DoD) and other government settings. SEI researchers have adapted the RFA profiling technique to include typical factors related to adopting Agile methods in any setting. We have also focused on other factors more uniquely associated with adopting Agile methods in highly regulated government acquisition environments, such as the DoD and the Department of Homeland Security.
To date in this series, we have characterized four of the six categories used to profile readiness and fit:

- business and acquisition (discussed in the first post)
- organizational climate (discussed in the second post and continued in the third post)
- project and customer environment (discussed in the fourth post)
- practices (discussed in the fifth post)
- system attributes (discussed in this post)
- technology environment (to be discussed in the next post)

Categories and factors continue to evolve as we pilot the analysis in client settings, but the six listed above are the ones we are currently using.

Applying the Readiness & Fit Analysis

Each category of readiness and fit has a set of attributes that can be characterized by a statement representing the expected behavior of a successful Agile project or organization operating in relation to that attribute. For example, an attribute from the system attributes/technology category is stated as follows: Loosely-coupled architecture. Product architecture allows for at least some of the components to be produced independently (architecture reflects loose coupling). Application Notes: At the beginning of an Agile adoption project, organizations are often uncertain about their current state in terms of adoption factors or the importance of individual factors (such as alignment of oversight practices with Agile practices) to organizational adoption success. Later in the process, an RFA can highlight adoption risk areas that were overlooked during earlier phases of adoption. Using the example above, an organization may already have an architecture in place when it considers adopting Agile practices that is not loosely coupled and contains many dependencies. This will make it harder to create independent slices of functionality, and there will be inherent rework because of architectural dependencies as the system evolves.
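One way to picture how an RFA surfaces risk areas is a simple self-rating exercise against each attribute's expected-behavior statement. This is a sketch, not the actual RFA instrument: the 1-to-5 agreement scale, the threshold, and every attribute tag except "loosely-coupled architecture" (which is quoted in the post) are assumptions:

```python
RISK_THRESHOLD = 3  # flag attributes rated below this as adoption risks

def profile(ratings):
    """Return the attribute tags whose self-rating signals adoption risk.

    `ratings` maps an attribute tag to how strongly the organization's
    current state matches the attribute's expected-behavior statement
    (1 = not at all, 5 = fully).
    """
    return sorted(tag for tag, score in ratings.items()
                  if score < RISK_THRESHOLD)

risks = profile({
    "loosely-coupled architecture": 2,        # many dependencies today
    "system supports iterative delivery": 4,
    "critical dependencies accounted for": 1,
})
```

Run at several points in the adoption journey, the same exercise shows risk areas retiring (for example, after a re-architecting effort) and new ones emerging.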
After one or two pilots, however, the impacts of the tightly coupled architecture may motivate a strategic pause to explicitly rework the architecture to more readily accommodate incremental, iterative Agile approaches. After that re-architecting, this would no longer be an area of adoption risk, and the organization can move on to other issues. The key point here is to be prepared to apply RFA principles and techniques at multiple points in your adoption journey. The remainder of this post is dedicated to the system attributes category.

The System Attributes Category

System attributes cover technical aspects of the product under development and determine whether the work benefits from division into smaller chunks that can be produced iteratively and built upon over time. If the program cannot be broken into loosely coupled, smaller chunks, then it will be hard, if not impossible, to produce "slices" of functionality that can stand alone and provide usable functionality at the end of each iteration. Each iteration in an Agile development effort ideally produces potentially shippable code (not always for external customers; it could be for internal customers). This production of usable functionality frequently means that someone could deploy and use the code to achieve some tangible benefit prior to the entire product's completion. As in other RFA categories, each item in the following list has both a tag (a short title that summarizes the statement) and a statement that provides a condition or behavior one would expect to find in an organization with successful engineering and management methods consistent with Agile principles, as published in the Agile Manifesto. Loosely-coupled architecture. Product architecture allows for at least some of the components to be produced independently (architecture reflects loose coupling). From an Agile perspective, a loosely-coupled architecture has two implications.
First, in an Agile environment, each iteration should yield potentially shippable code. With a loosely-coupled architecture, a component can be produced independently, and it should be easier to produce something usable within one iteration. Second, in today's environment, especially on large DoD programs, staff are distributed across multiple locations. While modern communications support the use of Agile methods in these distributed environments, it is easier for one team to be responsible for an independent, loosely coupled module than for a module with many interfaces requiring significant communication and collaboration. System supports iterative delivery. System or service type is compatible with an incremental release and delivery strategy. The third Agile principle states, "Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale." If the system or service requires a plan-driven, documentation-centric approach—complete with multiple milestones, large reviews, and "big bang" integration—then the shorter iterative approach will be incompatible and may cause additional work and many disconnects throughout development. Critical dependencies accounted for. Mission- and safety-critical components and dependencies are identified and accounted for in the program strategy. Agile programs deliver working software frequently. If mission- and safety-critical components and dependencies are not identified and accounted for in the program strategy, however, the software could easily be incomplete and cause significant rework. In fact, the ninth Agile principle states, "Continuous attention to technical excellence and good design enhances agility." This principle means that critical components and dependencies should be considered up front.
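The loosely-coupled architecture attribute discussed above can be illustrated with a small sketch. The component and interface names are hypothetical; the point is that a team owning the checkout component can build, test, and ship independently of the team building the real payment gateway, because the two meet only at a narrow interface:

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """The narrow interface where two independently produced components meet."""
    def charge(self, account: str, cents: int) -> bool: ...

class StubGateway:
    """Stand-in one team can use while another team builds the real gateway."""
    def charge(self, account: str, cents: int) -> bool:
        return True  # always approve, for testing only

class Checkout:
    """Depends only on the interface, never on a concrete gateway class."""
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def purchase(self, account: str, cents: int) -> str:
        return "ok" if self.gateway.charge(account, cents) else "declined"

result = Checkout(StubGateway()).purchase("acct-1", 499)
```

Swapping `StubGateway` for a production implementation requires no change to `Checkout`, which is what allows each component to become a potentially shippable slice within a single iteration.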
This principle also relates to the importance of paying close attention to architecturally significant requirements as part of the early activities. Some programs deal with this in a "Sprint Zero" before the actual software development begins. Others use the concept of an architectural runway to ensure that dependencies and interfaces are understood to a useful level before implementation decisions are made. Security requirements accounted for. Security drivers are accounted for in the program strategy. As with critical dependencies, continuous attention to technical excellence and good design requires developers to consider security requirements up front and throughout development. This consideration is particularly true for DoD or other complex programs (also found in healthcare, medical devices, and financial systems, to name a few) that have complex security requirements. Failure cost accounted for. Cost of failure is understood and accounted for. Failure can take many forms—working software that does not deliver the needed functionality, late software deliveries, or software development that is incompatible with the rest of the organization. The 12 Agile principles counter these issues. However, the organization must consider how Agile development activities relate to the overall project and to the organization's culture. If these relationships are not in sync, the overall Agile project could be in jeopardy. Appropriate criticality. Criticality of the software in meeting business or mission goals is addressed for the program. Many teams embrace Agile methods because the software is needed quickly or provides a competitive advantage. If the criticality attribute of the requirements is not prioritized appropriately, however, even software delivered early and often may not provide the required functionality.
Sound engineering principles are still required when employing Agile methods, as emphasized in the first part of the ninth Agile principle: "Continuous attention to technical excellence…" The system attributes category may be smaller in terms of the number of attributes addressed, but it is an important area for successful Agile adoption. The next post in the series will present the final category, technology environment, and complete our whirlwind tour of the six categories in the Agile Adoption in DoD RFA model. I look forward to hearing about your experiences adopting Agile methods, especially in regulated environments. Please leave a comment below or send an email to info@sei.cmu.edu.

Additional Resources

The SEI technical note Agile Methods: Selected DoD Management and Acquisition Concerns outlines many of the project and customer environment issues that arise in Agile adoption in the DoD. To download the report, please visit http://www.sei.cmu.edu/library/abstracts/reports/11tn002.cfm. Some of the issues related to project and customer environment challenges are also detailed in the October 2013 SEI technical note Parallel Worlds: Agile and Waterfall Differences and Similarities, which can be downloaded at http://resources.sei.cmu.edu/library/asset-view.cfm?assetID=62901. I am recording a series of podcasts with Mary Ann Lapham exploring the real-world application of Agile principles in the DoD. To view the series or download episodes, please visit http://www.sei.cmu.edu/podcasts/agile-in-the-dod.
SEI . Blog . Jul 27, 2015 01:37pm