By Aaron Volkmann, Senior Research Engineer, CERT Cyber Security Solutions Directorate

This post is the latest installment in a series aimed at helping organizations adopt DevOps. The DevOps movement is clearly taking the IT world by storm. Technical feats such as continuous integration (CI), comprehensive automated testing, and continuous delivery (CD), which at one time could be mastered only by hip, trendy startups seemingly incapable of failure, are now being performed successfully by traditional enterprises that have a long history of IT operations and still rely on legacy technologies (the former type of enterprise is known in the DevOps community as a "unicorn," the latter as a "horse"). In this post, I explore the experience of a fictional horse, Derrick and Anderson (D&A) Lumber, Inc., a company that hit some bumps in the road on its way to DevOps. As D&A finds out, a DevOps transformation is not a product that can be purchased from the outside, but a competency that must be grown from within.

D&A is a retail company with 250 stores making a net profit of $210 million annually. D&A’s IT operations have grown organically since their humble beginnings in the early 1980s, relying on a proprietary hardware and software vendor for the point-of-sale (POS) and inventory systems. Due to the long history of incremental upgrades from the vendor and hundreds of custom modifications and bolt-on programs developed by in-house development staff, D&A’s systems were becoming increasingly hard to maintain. Deployment of software updates proved especially challenging due to a complex nightly batch schedule that synchronized data between the remote store locations and the central office. As numerous organizations did in the early 2000s, D&A hired a fresh crop of engineers to develop and maintain the company’s web presence. The website grew to become integrated with the proprietary legacy POS and inventory system.
Many features added to the website were jointly developed by the web and legacy backend development teams. The website would call custom programs running on the legacy system to exchange data between the two platforms. Over the next several years, adding new features to the website became painful due to the need to update programs running on the legacy system. Moreover, the website was becoming slower and slower as performance was constrained by calls to the legacy system. The marketing department was continually requesting new features, such as complex online ordering and a mobile offering, but the web team found that it could perform updates to the website only every six months. Deployment at D&A typically took from the close of business Friday until Sunday afternoon because the team needed to allow the nightly batch processes to complete for Friday, back up the system, deploy the updates, and manually verify that the changes were successful. Any misstep caused a cascading effect in the batch processes, where missed days of data had to be manually loaded by IT support staff the following Monday. It was common for lingering issues from an upgrade not to be completely resolved until halfway through the next business week.

D&A needed to do something differently. The CIO caught on to the buzz in the industry about the DevOps movement and knew this was the change that D&A needed. The company hired a DevOps consulting firm to work with its web team to analyze the company’s web application and implement a plan to improve its deployment. The consulting firm recommended purchasing and implementing a software package from a partner company to do CI and CD, which would improve D&A’s software quality and help speed up deployment of its website. The consulting company helped D&A’s operations team stand up the DevOps package and configured it to build the web application upon every check-in to the source code repository.
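A build-on-every-check-in pipeline of the kind described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not D&A's actual tooling: the step names and commands are placeholders, and a real CI server would watch the source repository and trigger the pipeline on each commit.

```python
import subprocess

def run_ci_pipeline(commit_id, steps):
    """Run each named build/test step for a commit, stopping at the
    first failure so developers get feedback minutes after a bad
    check-in instead of months later."""
    for name, cmd in steps:
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            return {"commit": commit_id, "status": "broken", "failed_step": name}
    return {"commit": commit_id, "status": "passed", "failed_step": None}

# Placeholder steps; a real pipeline would invoke the actual build and test tools.
report = run_ci_pipeline("a1b2c3", [("compile", "true"), ("unit-tests", "true")])
```

The payoff D&A saw, feedback within minutes of a bad commit, comes from running something like this automatically on every check-in rather than once per release.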
These changes produced several benefits. For example, productivity improved because the development team received feedback within minutes of committing bad code that broke the build. The DevOps firm also developed a process for automating deployment of the web application that allowed IT operations staff to deploy the built code to production with the press of a button.

Throughout the engagement with the DevOps consulting firm, the legacy team was too busy with a major software and hardware upgrade to participate in the project. The web team’s technical manager brought this lack of involvement to the attention of the CIO, but it was mandated that the legacy team must not be distracted from completing the upgrade on schedule. Besides, the consultants had expertise only in Linux and Windows systems and no knowledge of D&A’s proprietary legacy system.

Two months passed as the web and legacy teams developed and tested the next set of new website features, which allowed customers to request a special coating on their lumber products. The new capability was already being talked about in trade magazines and touted by D&A’s marketing department to the press. The time came to release the website updates along with the corresponding updates to the legacy system at retail store locations. The automated website deployment completed quickly and successfully, as expected. Deployment of the legacy system update, however, did not go as smoothly. About half of the remote locations’ servers did not come back online after the update due to an operating system hotfix that had not been applied consistently across all store locations. The systems that had received the hotfix worked, but the others hung because of a new system call introduced with the custom software update to support the website enhancement. These servers required manual intervention that took close to an entire day to remediate.
These store locations were missed during the next day’s batch schedule, and the usual Monday morning fire drill was on as VPs complained that their data warehouse reports looked wrong due to missing data for several locations. The CIO ordered a detailed analysis of the incident. The company expected its investment in DevOps to speed up deployment of the website. The website was deployed quickly and smoothly, but the complex legacy backend systems remained a bottleneck to the whole process. D&A missed the opportunity to properly remediate its deployment problems when engaging the DevOps consulting firm. That missed opportunity cost hundreds of staff hours cleaning up the upgrade fiasco, hours that would have been better spent improving operations on the legacy side.

Over the next few months, the legacy and web teams got together and explored re-architecting the integrations between the legacy systems and the web front end. Both teams found that they could de-normalize their data by computing on-hand quantities of products at the remote locations: subtracting sales from orders and storing the results in a relational database at the central office instead of taxing the legacy system at the stores. This new process was separate from day-to-day retail operations and could be updated at any time without risking the legacy system’s availability. They also began to use a new Java-based service layer offered by the legacy system vendor. This layer alleviated the need for the legacy team to deploy custom code every time a small change was needed to the required web interfaces. The legacy team automated the process of deploying software updates to all of their test and production systems to ensure they ran identical software. This automation gave the team confidence that what had been tested in the test environment would work in production.
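The de-normalization the teams settled on is simple arithmetic: on-hand stock is what a store has ordered minus what it has sold. A minimal sketch, using hypothetical store and product identifiers:

```python
def on_hand_quantities(orders, sales):
    """Compute on-hand stock per (store, product) key as units ordered
    minus units sold -- the de-normalized figure kept in the central
    relational database instead of queried from each store's legacy system."""
    return {key: qty - sales.get(key, 0) for key, qty in orders.items()}

# Hypothetical example data for one store and one product.
orders = {("store-7", "2x4-stud"): 500}
sales = {("store-7", "2x4-stud"): 120}
stock = on_hand_quantities(orders, sales)  # {("store-7", "2x4-stud"): 380}
```

Because this table lives at the central office and is rebuilt from order and sales feeds, it can be refreshed at any time without touching the stores' legacy systems.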
Automation also helped them avoid large deployments that could take an entire weekend to complete by allowing them to stagger the updates of legacy systems throughout the week without affecting production operations.

D&A learned that each technology area within the enterprise is unique in its constraints and capabilities. The website, which could be deployed quickly and easily through automation, was constrained by its dependence on the legacy backend system, which was much slower to change. Because the outside consultants focused only on the lowest-hanging fruit, D&A’s initial DevOps venture missed the mark. The key lesson from this case study is that broad transformation of enterprise technology must come from within the organization, using the expertise and knowledge of internal subject matter experts who can navigate the maze of systems integrations often in place. These transformations can be time intensive, hard, and initially painful, so these initiatives can only succeed with senior leadership’s full support and investment. The best kind of DevOps is not bought, but learned.

Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.

Additional Resources

To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.

To read all the installments in our DevOps series, please click here or on the individual posts below.

An Introduction to DevOps
A Generalized Model for Automated DevOps
A New Weekly Blog Series to Help Organizations Adopt & Implement DevOps
DevOps Enhances Software Quality
DevOps and Agile
What is DevOps?
Security in Continuous Integration
DevOps Technologies: Vagrant
DevOps and Your Organization: Where to Begin
DevOps and Docker
Continuous Integration in DevOps
ChatOps in the DevOps Team
DevOps Case Study: Amazon AWS
DevOps Networking Solutions
SEI . Blog . Jul 27, 2015 01:28pm
  The date was March 5, 1987.  My mom was in Northbrook, Illinois beginning her training as an Allstate agent for what seemed like a never-ending three weeks. I was eleven years old and staying with my grandmother.  I loved my grandmother.  She was the best grandmother ever.  This is mostly because she made me guava-paste-and-cream-cheese crackers every night much to my mother’s chagrin. Her greatness was cemented as she fell asleep while I sugared up. She didn’t even bat an eye when I stayed up past 1AM watching late night television.  In one fell swoop I was mystified by...
SHRM . Blog . Jul 27, 2015 01:28pm
Michelle Dyson, Director, Business Transformation Group I’m Michelle Dyson, Director in the Business Transformation Group. I’m responsible for wide-ranging aspects of our transformation, including designing what the future DWP is going to look like and how we get there. Why do we have to transform? Because the expectations of our customers are increasing all the time. They (and we - we are all likely to be DWP customers during our lifetimes) expect to be able to interact with public services as they interact with all other services. And because we expect continued pressure to cut costs. If we get it right, transformation should help us to both improve customer service and to cut costs. We have a vision for what a transformed DWP should look like. We also now have a high-level roadmap showing us the milestones that will take us to our vision. Like a Rubik’s cube, our roadmap has different cuts of our transformation. One cut shows the milestones of our key transformation programmes. Another cut, for example, measures our progress in delivering on the enablers for our transformation, or pillars on which our transformation is built, eg intelligent use of data. This week, we’re holding a SPRINT DWP event to share the vision for the future DWP, and bring together some of the people who will play a part in transforming the department. We’ll turn the spotlight on our roadmap, and on the flagship and lesser-known services that contribute to it, and demonstrate that we are on the journey to achieving our vision. Still a long way to go, but very real and tangible progress. We can’t do transformation on our own. We need ambassadors across DWP to make it happen. Ambassadors to talk about transformation, to bring it to life in their teams, to spot opportunities for further transformation, to spot opportunities for join up including across government, and to be creative in removing barriers. SPRINT DWP. 
A great opportunity to demonstrate what we have achieved, inspire confidence in the next stages of our transformation and engage people in delivering it.
DWP Digital . Blog . Jul 27, 2015 01:28pm
Today I presented our 2020 vision at the last of the Spring 2015 Sprint DWP events in Leeds Civic Hall. In the presentation I covered the "perfect storm" of pressures and opportunities for the department, and the need for a joined-up and evolving Business Design for the future, if we are to make the most of the opportunity. I explained the idea of High-Touch and Low-Touch services, to deliver the policy intent in a modern and efficient way. I finished by explaining the design process and the critical enablers in the business design, and the way we are bringing these into reality. My requests for the attendees were that they should feel able to challenge work that doesn't join up, pass on this story, and recognise the vision is evolving. Several people have asked us to share the full presentation, so alongside our internal communications, here is what I presented in full. (10 minute read)

Our transformation journey

My presentation at today's Sprint DWP event in Leeds Civic Hall has three main aims: give some more detail about our vision for the future of the department; share some of the ideas that are behind that vision; and give some highlights of what we’re doing now to make it into a reality. Our transformation is now well underway and it’s gathering momentum. We are past the point of asking ourselves if this is something we should be doing, and starting to get into the nuts and bolts of what it is really going to look like and how DWP is going to get there. We are in a "perfect storm" of external pressures on the department, including citizens’ expectations of us and the financial pressure we’ll face during the next parliament. But we also have a huge opportunity through the change programmes that we are running, and our technology that needs rebuilding. If we are to get this right and make the most of this opportunity, we need to have a clear vision for where we’re trying to get to.
And we will need the help of everyone at Sprint DWP to direct all of our change efforts towards making it a reality. For that to work, we need a joined-up Business Design to work towards, and it has to be both ambitious and achievable.

The starting point

Just after I joined DWP we held a Digital Transformation Group conference in Sheffield, back in April last year. In the panel session I got asked "what does a great year look like?" for our business transformation. That’s always a great question to ask (and we should keep asking one another). My answer was that it would be a great year if we could get to agreeing a much clearer view of what DWP would look like in the future, and start to align our change programmes so that they know which pieces of that vision they are building. So the good news is that we do have a much clearer view of our future, and we’ve produced a first version of a joined-up roadmap across the department that starts to show how the moving parts fit together. As you’d expect with a challenge of this scale, it hasn’t all been plain sailing! If everyone loves your idea, then you should worry it’s not forward-thinking enough.

The picture

Last summer we created a picture and a video to explain our transformation journey. The photo shows us having a conversation about it in the business transformation slot at the Digital Academy. We ended up showing it to about 1,000 people, and got lots of feedback that it was really helpful, and also lots of great suggestions for how to improve it.

Great conversations about our future and the transformation journey at #DWPdigitalacademy today @DigitalDWP pic.twitter.com/lrZ9Ek2j2N — Andrew Besford (@abesford) July 22, 2014

We have come a long way since then, and we have worked out so much more detail. So we wanted to create a new version that we could share even more widely around DWP to help bring our transformation story to life. And I’m delighted that we’ve unveiled the new version of the video today.
It was still "cheap & cheerful", but I am really proud of where we have got to - the team have done a great job of bringing people together from all over DWP to create a positive and exciting vision of the future. We’re also really excited that the video has a ‘superstar’ voiceover for the first time - featuring members of our Executive Team.

Kevin Cunnington and Andrew Besford with our vision for DWP in 2020

A key message in the 2020 vision is that there are some things we do at the moment to deliver our services, which we don’t have to do that way in the future. We have the opportunity to deliver a better service for our customers, and do it in a more modern and efficient way for our taxpayers. The customer experience layer describes DWP’s customer proposition in the future, such as "Better use of existing data", "More automation where it’s safe". Those are the words which have been introduced across DWP in the last two rounds of our internal DWP Story events, so they should be increasingly familiar to our people. One of the aims of the picture is to explain how we organise ourselves in future to deliver that customer experience, across our people, technology and ways of working. We want to be clear that it’s not just about building websites, but about our ambition for the way the whole of the business operates.

High-Touch Low-Touch thinking

A key part of the underlying thinking is about us making explicit choices between using "High-Touch" and "Low-Touch" ways of delivering our services. We’re saying "High-Touch" to refer to the parts of our services where we need to use manual interventions to operate the business. That could be face-to-face, on the phone, or carrying out a back-office process. When we do things in a High-Touch way, we are investing the efforts of our people, and that comes at a cost, especially when we factor in overheads like the buildings those people are in.
We might choose to use High-Touch because of a person’s health condition, or we might do it because our data shows it will reduce the potential for fraud. Either way, complicated cases become easier if our front-line people are presented with the right information, to help them give the best advice. The opposite of that is "Low-Touch", which refers to the parts of our services we deliver without human intervention. Low-Touch means a simplified, automated way to achieve the same outcome, which is quicker or cheaper, maybe both. DWP already has many Low-Touch activities today, like automated processing of payments. But at the moment we sometimes have to ask customers for information which they have already provided to us. Why would we do that if we were able to make smart use of information we already have, so that we can carry out the process automatically? That’s better for our customers, and it’s better for DWP. We need to recognise the differences between High-Touch and Low-Touch, and we need to knowingly choose which is right at any stage to deliver services to customers. In our vision we want to systematically use Low-Touch whenever we can. One of the ways that we’re putting this into action now is that we’re looking at areas where we can build a proof-of-concept for High-Touch Low-Touch so that we can demonstrate it working for us.

Is this realistic?

Technologies and ways of working that are now commonly used in all sorts of organisations give us the opportunity to continuously build up data and use it throughout a customer’s journey with us, in a way which hadn’t been envisaged back when most of DWP’s current IT was created. So the focus is on making safe decisions in individual customer experiences, and also building up a big picture from the data, to make sure we’re making the right decisions in general, and to help us continuously improve.
That’s why we’ve put our intelligent use of data story right through the middle of the 2020 picture, because it's now becoming central to the way we deliver our services. And having the right technology and data science skills will be crucial.

Continuously evolving the operating model

Another important feature of this vision is that it’s a user-centric organisation that is constantly exploring and searching for what works, and making those ideas part of the way we operate the service for everyone. In that sense I describe the Business Design as an "Evolving" Operating Model, rather than a static Target Operating Model. So this is not a set-in-concrete, linear description of exactly how the organisation will work in 2020. We’re laying out the key moving parts of DWP in the future, and for some of these we’ll only build up the detail through many years of adapting and improving, working closely with our change programmes which inform, and are informed by, the overall design.

Analytics and User Research in our 2020 Vision

This is why it’s critical that we have our User Researchers understanding what our customers need from us. We’ll know more about our users than we’ve ever known before, and this can help us make life easier for them, and for ourselves. So User Research will remain a central feature of DWP in 2020, and already today we have BTG’s User Researchers working as part of the programmes. Thinking about the future costs to operate the business, how do we know that this is a credible story that will make sense from a financial perspective? Our analysts have built confidence through their work, by looking at the opportunities to improve our current customer journeys with High-Touch Low-Touch thinking. And of course our Analytics team will also play an important role in DWP in 2020, which is why they’re also shown here.
The Business Design process

I’ve covered why we need a Business Design, and some of the important features of it, but I also wanted to say a few words about the design process we’ve been going through to get to this point. This is "The Squiggle of the Design Process" by a designer called Damien Newman, which he uses to explain how things are uncertain in the beginning of a design process but become increasingly clear.

The Squiggle of the Design Process, by Damien Newman (CC BY-ND 3.0 US)

At the start there was lots of uncertainty and it was very difficult to make a plan for how we were going to get there. But we knew we needed to get to a reasonable idea of how the size and shape of the business will change over time. For example we know all this is going to need people with different skills and a different grade mix, working in different ways. To get this right we have to describe how our people, processes and technology all work together to deliver our services. The process of doing this is as much an art as it is a science, but we’re moving along the squiggle now, and we have made sense of the different moving parts. One of the things that’s become clear in our work is that at DWP we’ve used technology for years to support admin tasks, but we can now use it to deliver services in a radically different way. So it’s becoming clear that we are now at a turning point where technology and data become central to the way that DWP operates. We’re starting to see the detail of the business design becoming a lot more granular than the general Civil Service Capabilities Plan (in areas like Cyber Security). This work is helping us collaborate with HR colleagues to build out that plan to get the right skills in the right places in the organisation. Even just introducing a common vocabulary we can all share is a big help. We’re now starting to get to the clarity we need on this, and we are at the point where the concept is clear and the focus shifts to delivery.
Our Enablers

Our Enablers are the six critical areas where we’ve identified we need to drive a step change, to "do more". These aren’t the only things we need to do, but they’re the ones that we have to get right. And importantly the Enablers are Business Capabilities, so we mean "the things DWP is able to do" - the combination of the technology, the people and the processes, to deliver the business outcomes.

Our Enablers: six critical areas which combine technology, people/locations and processes to deliver business outcomes

We’ve drawn these as being under construction in the picture to reflect that they’re building our future, and they’re already underway. For example, Universal Credit is already building important parts of our "Decision Making Based on Trust & Risk". We’re working together to drive out the details of how this becomes the way we operate across DWP, for example in Pensions. And we’ve mobilised a piece of work that will bring together all of the different aspects of "Data", which is central to the vision, so that we have a joined-up plan for what we need to create. It’s important that we all start to understand Government as a Platform. DWP will be part of Government as a Platform, where government will increasingly be delivering services by sharing capabilities across departments. We already have some shared capabilities: GOV.UK is a shared publishing platform, and Verify is a shared identity assurance platform. There’s also the Performance Platform, and the Digital Marketplace. The Government Digital Service are actively looking at what next, and their thoughts are around areas like payments, status tracking, and address details.

"Tunnelling from both ends"

How are we making this all real? Nic Harrison and I have been using the analogy for a while of "tunnelling from both ends". When we say "tunnelling from both ends" we mean that Nic has been working with all of the in-flight digital programmes to maximise reuse.
That started with areas like storage and tools, which were the immediate areas where we needed to have shared ways of delivering our services. This work has now matured to the point where Nic’s team is building the Digital Service Centre platform for Universal Credit. So Nic has been "tunnelling forward" from the present. Meanwhile, the Business Design has been "tunnelling back" from 2020, getting that clear link through from the future customer proposition and the way we want to connect the business together to deliver those services. And in that work we’re now at the level of detail where we can see features, like the Digital Service Centres, and start to build our plans towards those. The tunnel is nowhere near finished but we now finally have that breakthrough where we can see the work meeting in the middle, and that’s building our confidence that we have been digging in the right direction.

The ask

My big asks to leave everyone with are:

Be clear on how your change work joins up with the big picture (and challenge anything that doesn’t)
Pass on this story about what the DWP of the future can be like
Recognise the vision is evolving: We’ve said "we need to do more", and we will all need to work together to help evolve and grow it. We can't do this on our own.

Keep in touch by following Andrew @abesford on Twitter.
DWP Digital . Blog . Jul 27, 2015 01:27pm
  The mobility landscape has evolved significantly in the past five years. Businesses are increasingly looking outside of their home markets to broaden their talent pools and place key skills where they are needed most. This also means that as companies expand to beyond their home markets, talent mobility can be the key competitive differentiator for success. The World Economic Forum recently shared in their Human Capital Report for 2015 that "Talent not capital will be the key factor linking innovation, competitiveness and growth in the 21st century." This will only become more important in the future, as companies are...
SHRM . Blog . Jul 27, 2015 01:27pm
By Julien Delange, Member of the Technical Staff

Mismatched assumptions about hardware, software, and their interactions often result in system problems detected too late in the development lifecycle, which is an expensive and potentially dangerous situation for developers and users of mission- and safety-critical technologies. To address this problem, the Society of Automotive Engineers (SAE) released the aerospace standard AS5506, named the Architecture Analysis & Design Language (AADL). The AADL standard defines a modeling notation based on a textual and graphic representation used by development organizations to conduct lightweight, rigorous—yet comparatively inexpensive—analyses of critical real-time factors, such as performance, dependability, security, and data integrity. AADL models capture both software and hardware components, as well as their interactions, including the binding of a software process to a processor or the deployment of a connection on a bus. The AADL standards committee, led by my colleague Peter Feiler, who played an instrumental role in the development of AADL, meets regularly with members from around the globe who represent a wide variety of industries, from avionics to aerospace, to discuss evolving elements of the standard and to work together on action items from prior standards meetings. In this post, we present highlights from a series of podcasts that we recorded with Feiler and four members of the standards committee discussing their real-world application and experiences with AADL. The AADL standard includes abstractions of software, computational hardware, and system components for specifying real-time, embedded, and high-dependability systems along with their software/hardware concerns and specific requirements (such as scheduling and bus latency or jitter), and for validating the system to ensure that stakeholders’ requirements can be achieved. Organizations have been using the AADL standard for nearly a decade.
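To make the software-to-hardware binding concrete, here is a small illustrative AADL fragment. It is a sketch only: the component names (control_proc, main_cpu, embedded_ctrl) are hypothetical and do not come from the standard or the podcasts.

```aadl
-- A process bound to a processor: the kind of software/hardware
-- association an AADL model captures for analysis.
process control_proc
end control_proc;

process implementation control_proc.impl
end control_proc.impl;

processor main_cpu
end main_cpu;

system embedded_ctrl
end embedded_ctrl;

system implementation embedded_ctrl.impl
  subcomponents
    main_proc : process control_proc.impl;
    cpu       : processor main_cpu;
  properties
    Actual_Processor_Binding => (reference (cpu)) applies to main_proc;
end embedded_ctrl.impl;
```

Given a model like this, tools such as OSATE can check scheduling, latency, or dependability properties against the declared bindings before any code exists.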
An early adopter was the European Space Agency, which led the ASSERT project, which eventually became The ASSERT Set of Tools for Engineering (TASTE), targeted at the design and implementation of safety-critical systems. This project relied on AADL from its inception, and project members continue to use AADL to model, validate, and produce software. From this early use of AADL, many other projects and communities have adopted the language. The AADL committee and its members tracked users’ experiences and, in response, published a new version of the AADL standard in 2009. The committee also released a minor revision in September 2012. The remainder of this post includes conversations that Feiler had with four committee members representing various industries who have adopted AADL, including aviation and aerospace.

AADL and Aerospace, featuring Myron Hecht and Peter Feiler

Myron Hecht’s research at Aerospace focuses on safety, reliability, and model-based system engineering and its application to quantitative and qualitative reliability analysis methods. He has previously worked in the domains of nuclear reactors, air traffic control, and avionics. During the podcast interview, Hecht stated that AADL represents a path for enabling practitioners in the dependability community:

By dependability, I mean reliability, safety, and security, to be able to interchange models and thoughts and analyses in a structured way. The problem with the field at this point is that there are a lot of good ideas, but we don’t understand each other because it takes too long to figure out what everybody is doing. The biggest problem that we have in these analyses is the unstated assumptions and unstated limitations.
The sooner we get onto a standard platform where we’re all speaking the same discipline really, the sooner we can begin to make major progress in building the next generation of computerized systems, in which people’s lives may depend even more on the computer itself, without any manual intervention. I have no idea how we’re going to do driverless cars. I have no idea how we’re going to do home medical devices for critical illnesses unless we really get greater confidence in our ability to do the analyses. When asked how AADL contributed to helping Aerospace make the transition from unstated to stated assumptions and limitations, Hecht responded as follows: Well, the most important contribution that AADL made [came] from its origination in the aircraft industry, when it was born out of an earlier project from DARPA, where they really intended to develop real-time systems for avionics applications, simply by specifying them in the design. When you work in that environment, one of the things that you’re really worried about is what happens if things go wrong. If your car radiator boils over, you can stop by the side of the road. Hecht used the first version of AADL’s error model annex, which was released in 2006, and extended some of the original tooling work by Ana Rugina to build a tool suite that helps him in his work. Hecht leveraged the AADL notations, extended with reliability information, to generate safety models and safety analysis documents from the architecture, such as failure mode and effect analysis (FMEA) reports. He continued: Well, I think what is more to the point is that they get insight into the failure behavior and the weaknesses of their system and what they might have to do to improve it. Not only that, but the sponsor of their work, who is typically not the designer himself or themselves, is going to know what’s going on.
[The program manager is] going to be able to monitor that part of the process...So, that constant feedback between design and analysis, which now becomes a very tightly coupled loop in a very, very rapid process, is one of the key enablers that allows us to build complex safety-critical, life-critical, and mission-critical systems. To listen to the complete podcast, AADL and Aerospace, please visit http://www.sei.cmu.edu/podcasts/podcast_episode.cfm?episodeid=88335&wtPodcast=AADLandAerospace.
AADL and Edgewater, featuring Serban Gheorghe and Peter Feiler
Serban Gheorghe, vice-president of technology at Edgewater Computing Systems Inc., explained that he first became involved in AADL as Edgewater prepared to deploy a communication product. I got more involved in the technology. What we did is an AADL microkernel, basically with the intent of transforming it into a product. We are still working on that product. We have a few pilot trials in different places. The idea is to take AADL designs and convert them to our kernel and do all the static analysis in the same way and to have a very defined way of producing code, which is certified. So, the certification chain would be much easier to accomplish because it is always a constant concern. AADL provided a means for reducing evidence from a chain of certified components, from the model into the kernel into the actual code. The whole value proposition of our product was that—independent of the models on the project where our tools are used and independent of the targets and how the targets are used—the same chain of tools tends to be used. Gheorghe added that Edgewater is working with several international organizations to build a constraints annex, which adds a standard set of analysis tools within the OSATE tool environment, which is open source. But, you know, there will always be more analysis, which is project specific, and constraints, which are project specific, which have to be enforced.
What we are providing here is kind of a generic sub-language to be able to express those concerns and constraints. The ability to express those concerns and constraints serves as a catalyst for facilitating the evolution of a component. So, you can now create AADL components and fully characterize them in what you expect to get from them in terms of assumptions and guarantees. Secondly, I think it facilitates this concept of integrating multiple tools with multiple formalisms from the same repository. There are no different assumptions made by different tools. Feiler noted that this ability also enables architects to write specialized constraints for a project without implementing a new tool to enforce those constraints. Work on the constraints annex tries to leverage existing notations. For example, the standardized Property Specification Language (PSL) is used as one basis, with the committee then looking at what elements need to be added to make it useful in this context. To listen to the complete podcast, AADL and Edgewater, please visit http://www.sei.cmu.edu/podcasts/podcast_episode.cfm?episodeid=88335&wtPodcast=AADLandAerospace.
AADL and Télécom ParisTech, featuring Etienne Borde and Peter Feiler
Etienne Borde, an assistant professor and researcher at Télécom ParisTech, explained that the technical university became interested in AADL because the embedded systems industry is growing at a robust pace, especially in Europe. Borde’s research focuses on software engineering for real-time, embedded systems. We use it as a teaching tool and as a research tool, and both parts are a little bit challenging. The research, of course, is because we have to face new problems. The teaching is because AADL is typically the kind of language that you can use for very advanced software engineering methods. We tried to teach it to students who had done very little software work in the labs.
So, there is this kind of gap between what they’re able to understand about software engineering challenges and what AADL is meant to answer. AADL has helped systems engineers understand how software affects their decisions or how their decisions affect software performance. This is typically why we are interested in AADL as well. It’s because it’s a very good language for us, in our opinion, to tackle the issues that you have in safety-critical, embedded systems. So, those [systems] for which you need to have very strong guarantees in the behavior of your software applications. Borde added that his team mainly uses AADL in its research to predict the behavior of software applications and to derive the configuration of the operating systems that will host those applications. The operating systems in safety-critical, embedded systems have very different characteristics than those in standard computer systems. Of course, you can’t accept that your operating system fails the same way that your home operating system could fail…You can’t have delays of tasks. You need to have specific operating systems. Those are quite difficult to configure. You have to be careful in the configuration process of those and the type of guarantees that you can manage by this configuration process. Feiler added that Borde’s team at Télécom ParisTech is working on code generation capabilities that automatically create the complete execution runtime from the model. This generated code also integrates the functional code written by potentially different suppliers. The individual pieces of application code may be written in Simulink or another modeling language, Feiler added, but AADL provides the glue that interconnects them. To listen to the complete podcast, AADL and Télécom ParisTech, please visit http://www.sei.cmu.edu/podcasts/podcast_episode.cfm?episodeid=77231.
AADL and Dassault Aviation, featuring Thierry Cornilleau and Peter Feiler
At Dassault Aviation, Thierry Cornilleau contributes to a number of real-time and avionics software studies and projects. As an engineer working in the avionics domain, Cornilleau is also interested in another standard, ARINC 653, and the connection between AADL and ARINC 653. Feiler noted that Cornilleau’s experience helps the AADL committee define the standard in ways that ensure it meets the needs of the avionics industry. Cornilleau provided inputs and insights to the committee when it worked on the AADL ARINC 653 annex, a document that details how to represent ARINC 653 architectures with AADL. Thanks to his contributions, the document was considered mature enough to be published as an official SAE standard. Dassault is very progressive in its use of formal methods, including AltaRica, a language used for system fault modeling and analysis, which ties in to the error model annex. Cornilleau remarked that two important recent developments with AADL have been the error modeling annex and the maturity of OSATE, a tooling environment that helps users implement AADL within the Eclipse open-source environment. Cornilleau added that other members of his team at Dassault serve on the ARINC 653 committee. In addition to laying out a partitioned architecture, regular interaction with members of the ARINC 653 committee provided guidance for monitoring the health of the avionics systems, Feiler said, adding the following: With our now more formalized notation, one could get into a discussion with them about how we can provide more formalized guidance on how we can express this health monitoring architecture that they are suggesting to people. So, what we get into is that the AADL committee cooperates with other committees to put AADL to use in other settings as well.
To listen to the complete podcast, AADL and Dassault Aviation, please visit http://www.sei.cmu.edu/podcasts/podcast_episode.cfm?episodeid=433479. Additional Resources For more information about the Architecture Analysis & Design Language (AADL), please visit http://www.aadl.info/aadl/currentsite/. To view a recent webinar, Architecture Analysis with AADL, please visit https://www.webcaster4.com/Webcast/Page/139/5357.
SEI . Blog . Jul 27, 2015 01:27pm
Nic Harrison, Director of Enabling Digital Delivery. My friend and colleague Andrew Besford and I have been talking about tunnelling forwards from today (me) and tunnelling backwards from our 2020 vision (Andrew). In his recent Sprint DWP presentations, Andrew even used a photo of the breakthrough moment of the Channel Tunnel, when British and French tunnellers met under the English Channel in 1990. My Enable team exists in part to make sure all the digital services we are developing today are headed in the right direction to meet up with Andrew’s team's work in defining the way we arrange the business to deliver the services DWP needs in 2020. Enable is focussed on ensuring that the services we deliver as a department are consistent with the strategic aims of DWP (and wider Government). This means examining all in-flight or proposed changes to make sure they are aligned with our longer term strategy. To do this we have service architects in all major change programmes and a team of business architects who review all changes as they move through existing departmental governance. I have created a new forum called the Service Design Forum (SDF) where all these people meet face-to-face as a community, to understand who is doing what and when. This promotes sharing and re-use whilst reducing duplication and avoiding programme conflicts, without the need for heavyweight documentation and governance: an example of our new nimble ways of working. The SDF is a community made up of service design professionals from all digital projects and programmes, supported by subject matter experts from other areas of the business, including Business Transformation Group (BTG) Design and Deliver functions, Operations, Technology, Finance and Commercial. The SDF acts as an expert support group to projects in the discovery and alpha phases of agile delivery.
We advise on all aspects of service design, including the make-up of teams, security design, and technology architecture (through our Technology members). We help projects to pool ideas, learn about good design standards (from our library of design and security patterns), and understand what has already been built elsewhere so that it can be re-used rather than built from scratch again and again. We ensure that projects using the agile delivery method are either fully aligned to our strategic vision or are "failed fast" to free up scarce resources and reduce waste. Those projects that move through alpha into beta to become full-blown digital programmes are then subject to regular "strategic fit" reviews from the SDF as they pass through the formal PMU governance gates. The SDF also acts as a "knowledge share repository": we publish all the lessons learned in the SDF and the various design patterns, and maintain a store of re-usable artefacts available to all new and in-flight work. Business design tells us what DWP needs to look like in the future, and service design turns that vision into services that are actually delivered to users: citizens, other government departments and our colleagues. These designs have user needs at their heart, so one of the functions of the SDF is to ensure all our designs follow the GDS design standards. We are training in-house DWP people on the service design standards, so we can make sure all of our services meet this standard on an ongoing basis during development, and not just at the points when they are assessed externally by GDS. The SDF is still new and we are learning by doing; we have already started to set up subject matter sub-groups to solve specific problems in parallel to the SDF meeting schedule. This is an exciting time to be involved with the transformation of DWP, and the SDF is the heartbeat of our interaction with programmes.
I am excited to lead this function, where we will influence daily design decisions and help to shape the future of DWP.
DWP Digital . Blog . Jul 27, 2015 01:27pm
By Todd Waits, Project Lead, Cyber Security Solutions Directorate. This post is the latest installment in a series aimed at helping organizations adopt DevOps. In a computing system, a context switch occurs when an operating system stores the state of an application thread before stopping the thread and restoring the state of a different (previously stopped) thread so its execution can resume. The overhead incurred in storing and restoring state during a context switch negatively impacts operating system and application performance. This blog post describes how DevOps ameliorates the negative impacts that "context switching" between projects can have on a software engineering team’s performance. In the book Quality Software Management: Systems Thinking, Gerald Weinberg discusses how the concept of context switching applies to an engineering team. From a human workforce perspective, context switching is the process of stopping work on one project and picking it back up after performing a different task on a different project. Just like computing systems, human team members often incur overhead when context switching between multiple projects. Context switching most commonly occurs when team members are assigned to multiple projects. The rationale behind the practice is that it is logistically simpler to allocate team members across projects than to dedicate resources to each project. It seems reasonable to assume that splitting a person’s effort between two projects yields 50 percent effort on each project. Moreover, if a team member is dedicated to a single project, that team member will be idle whenever that project is waiting for something to occur, such as completing paperwork, reviews, etc.
Using our computing system metaphor, this switching between tasks is similar to multi-threading: if one thread blocks for some reason, other threads can perform useful work rather than waiting for the first thread to unblock. If all work were assigned only to the first thread, progress would be much slower. While multi-threading may be sound reasoning in computing systems, the problem is that human workers don’t actually get a clean 50-50 effort distribution. Effort is lost to context switching, and productivity may drop precipitously as the worker’s effort is spread across more projects.
Per-Project Effort Distribution
In the above graph, based on data from Quality Software Management: Systems Thinking, a team member with one project is able to devote 100 percent of his or her time to that project. A team member with two projects does not yield a perfect 50-50 split. The team member actually yields about 40 percent effort per project because of the amount of time (roughly 20 percent) needed for context switching. In other words, switching between projects requires operational overhead for the team member to figure out where he or she left off, what needs to be done, how that work fits in the project, etc. Once a team member is assigned five projects, his or her ability to contribute to any given project drops below 10 percent, with 80 percent of effort being lost to switching between project contexts.
Effort Lost to Context Switching
The losses from context switching are not limited to time; quality also suffers. In particular, context switching can lead to more buggy code being committed, team members being unavailable, and tasks being missed, all of which require additional effort to repair. Joel Spolsky compares the task-switching penalty for computers and computer programmers: The trick here is that when you manage programmers, specifically, task switches take a really, really, really long time.
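The effort curve described above can be sketched numerically. The following is a toy model, not Weinberg's actual dataset: it assumes each project beyond the first costs roughly 20 percent of a person's time in switching overhead, which matches the figures quoted in this post (about 40 percent per project at two projects, under 10 percent per project at five).

```python
# Toy model of context-switching losses. Assumption: each project beyond
# the first costs ~20% of a person's time in switching overhead, capped
# at 80%, per the figures cited in the post.
def effort_split(num_projects: int) -> tuple[float, float]:
    """Return (effort per project, total effort lost to switching),
    both as fractions of one team member's time."""
    if num_projects < 1:
        raise ValueError("need at least one project")
    lost = min(0.20 * (num_projects - 1), 0.80)   # switching overhead
    per_project = (1.0 - lost) / num_projects
    return per_project, lost

# One project: full focus. Two: ~40% each. Five: under 10% each.
for n in (1, 2, 5):
    per, lost = effort_split(n)
    print(f"{n} project(s): {per:.0%} per project, {lost:.0%} lost")
```

The linear overhead assumption is the simplest curve consistent with the two data points in the post; Weinberg's own table is empirical, not a formula.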
That's because programming is the kind of task where you have to keep a lot of things in your head at once. The more things you remember at once, the more productive you are at programming. A programmer coding at full throttle is keeping zillions of things in their [sic] head at once: everything from names of variables, data structures, important APIs, the names of utility functions that they wrote and call a lot, even the name of the subdirectory where they store their source code. If you send that programmer to Crete for a three week vacation, they will forget it all. The human brain seems to move it out of short-term RAM and swaps it out onto a backup tape where it takes forever to retrieve.
How DevOps Can Help
DevOps practices can help guard against some of the pitfalls of context switching, as well as alert the team when context switching is impacting product quality and team productivity. By leveraging continuous integration, any build failure will alert team members when their contributions are impeding application or feature development. Likewise, automating the assignment of code reviews can ensure that committed code meets proper style and security standards. Regular communication between team members is critical, especially when balancing work between projects. Without clear, structured avenues of communication, problems associated with context switching will quickly fall through the cracks. Daily stand-up meetings and enterprise communication tools allow team members to quickly identify when context switching may be adversely impacting their ability to deliver business value. Issue trackers help highlight when certain individuals may unknowingly take on too much work across too many projects. Ultimately, limiting the number of projects an individual team member works on is ideal.
In those instances when splitting effort is required, however, leveraging DevOps tools and philosophies will help mitigate potential disasters, shift resources as needed, and continue delivering the business value necessary to your success. Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below. Additional Resources Hasan Yasar and Aaron Cois will host a webinar, What DevOps is Not, at 1:30 p.m. ET on March 11, 2015. To register for the webinar, please click here. To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js. To read all of the blog posts in our series thus far, please visit http://blog.sei.cmu.edu/archives.cfm/category/devops-tips.
SEI . Blog . Jul 27, 2015 01:27pm
Just because "everyone" used to do something doesn’t mean it was the right or smart thing to do. That’s one of the good things about humanity - we often (although certainly not always) improve many aspects of our lives as the earth keeps spinning. Consumption of unhealthy diversions such as sugar-loaded cereal, soda and cigarettes is way down.
Value Hard to Find
If we can get smarter about what we devour, we should be able to do the same with the disdained annual performance review. Thankfully, we are. Perhaps nothing in the HR field is under as much attack these...
SHRM . Blog . Jul 27, 2015 01:27pm
By Mike Konrad, Principal Researcher, Software Solutions Division. As recent news headlines about Shellshock, Sony, Anthem, and Target have demonstrated, software vulnerabilities are on the rise. The U.S. Government Accountability Office reported in 2013 that "operational vulnerabilities have increased 780 percent over the past six years." These vulnerabilities can be hard and expensive to eradicate, especially if introduced during the design phase. One issue is that design defects exist at a deeper architectural level and thus can be hard to find and address. Although coding-related vulnerabilities are preventable and detectable, until recently scant attention has been paid to vulnerabilities arising from requirements and design defects. In 2014, the IEEE Computer Society Center for Secure Design was established to "shift some of the focus in security from finding bugs to identifying common design flaws—all in the hope that software architects can learn from others’ mistakes." "We believe that if organizations design secure systems, which avoid such flaws, they can significantly reduce the number and impact of security breaches," the center states in its report Avoiding the Top 10 Security Design Flaws. On a separate front, a group of researchers from various disciplines within the Carnegie Mellon University Software Engineering Institute recently came together to explore the implications of design-related vulnerabilities and quantify their effects on system cost and quality. This post highlights key issues and findings of our work.
Foundations of Our Work
According to a report issued by the National Institute of Standards and Technology (NIST), "the cost benefits of finding and addressing defects early are staggering. For every $1 spent on addressing defects during the coding phase of development, it will cost an organization $30 to address if detected in production." The economic consequences of vulnerabilities generally fall into two types: Harm caused.
Breaches are costly and cause loss of security, mission failures, theft of resources (including intellectual property and personal information), and hard-to-recover consumer confidence and trust. Fixing the problem. The time and cost expended to address known vulnerabilities and recover from breaches continue to increase at a pace faster than our ability to recruit and develop individuals with the necessary cybersecurity expertise. Growth trends indicate that unless steps are taken to address this issue, there will be a dearth of staff with the skills needed to identify vulnerabilities and deploy needed patches in the future. The SEI team worked to identify root causes in the requirements and design phases of the software development lifecycle. The team included researchers from two separate divisions within the SEI: one focused on software engineering and acquisition practice (the Software Solutions Division); the other focused on cyber threats and vulnerability analyses in the operational environment (the CERT Division). These two disciplines are frequently disconnected from each other during development, which is one of the contributing factors that cause vulnerabilities to be overlooked early in the lifecycle. For example, while software developers typically focus on defects, the operations team homes in on vulnerabilities. The software development side of our team included William Nichols, an expert in the Team Software Process (TSP) and process measurement. Likewise, Julia L. Mullaney, of CERT, is also a TSP expert. We also worked with two vulnerability analysts from CERT: Michael Orlando and Art Manion. Andrew Moore, a researcher in the CERT Insider Threat Center and an expert on system dynamics, also contributed to our effort. The team wanted to highlight sound requirements-gathering and design practices regarding security.
Such practices enable software developers to make more-informed decisions early in the software development lifecycle and thereby reduce the level of vulnerabilities released into production, where they are much more costly to address. Our research pursued three objectives:
• gain a better understanding of the state of research on vulnerabilities originating in software requirements and design
• leverage the extensive data collected by the TSP team indicating where in the lifecycle defects were inserted and what methods and practices were being used
• develop an economic model demonstrating the impact of vulnerabilities introduced during the requirements and design phases
Validating Our Premise
Early in our research, we reviewed key published literature on predicting security vulnerabilities in software. We focused on research into early indicators of vulnerabilities, such as what is known, and when, about potential vulnerabilities that might be actionable. We decided to conduct a systematic mapping study, which is a study of all the studies that exist on a topic. Mapping studies typically consist of the following four stages:
• identify the primary studies that may contain relevant research results
• conduct a second evaluation to identify the appropriate studies for further evaluation
• where appropriate, perform a quality assessment (examining for such issues as bias and validity) of the selected studies
• summarize results along a dimension of interest
What we found is that, with few exceptions, there has been little coordinated or sustained effort to study design- or requirements-oriented vulnerabilities. As detailed in the SEI technical report on this project, Data Driven Software Assurance: A Research Study, our team of researchers first wanted to validate our premise that there were vulnerabilities that occurred during requirements and design activities (more precisely, during requirements elicitation and analysis, and during architecture, design, and analysis).
Our team also wanted to verify that these vulnerabilities were as serious as some of the more common coding-based vulnerabilities and that they had significant economic impact. In 2012, our team of researchers investigated vulnerabilities collected in the CERT vulnerability database, which, at the time, contained more than 40,000 cases. Specifically, we created a heuristic based on recurring keywords to eliminate coding-related vulnerabilities:
VulNoteInitialDate is after 01/01/1970
and field Name does not contain overflow
and field Name does not contain XSS
and field Name does not contain SQL
and field Name does not contain default
and field Name does not contain cross
and field Name does not contain injection
and field Name does not contain buffer
and field Name does not contain traversal
From the resulting vulnerabilities, we next excluded reports that lacked sufficient information to determine a cause or had strong indications of implementation-related vulnerabilities. Of those that remained, the team completed an initial root cause analysis on each of the vulnerabilities to confirm that they were, in fact, likely to have been caused by requirements or design defects. From that list, we selected three vulnerabilities on which to conduct a detailed analysis. What follows is a brief analysis of one of the requirements- or design-related vulnerabilities that we identified from the CERT database, Vulnerability Note VU#649219, SYSRET 64-bit operating system privilege escalation vulnerability on Intel CPU hardware. Below is the original CERT description of the vulnerability and its impact: Description. Some 64-bit operating systems and virtualization software running on Intel CPU hardware are vulnerable to a local privilege escalation attack. The vulnerability may be exploited for local privilege escalation or a guest-to-host virtual machine escape.
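As a minimal sketch, the screening heuristic above can be expressed as a simple filter. The record format and field access here are hypothetical; the actual CERT database query syntax differs from this Python rendering.

```python
# Sketch of the keyword-screening heuristic (hypothetical record format;
# the real CERT database query language looks different).
CODING_KEYWORDS = ("overflow", "xss", "sql", "default",
                   "cross", "injection", "buffer", "traversal")

def is_design_candidate(note: dict) -> bool:
    """Keep notes dated after 1970-01-01 (ISO date strings assumed)
    whose Name field contains none of the implementation-bug keywords."""
    name = note.get("Name", "").lower()
    return (note.get("VulNoteInitialDate", "") > "1970-01-01"
            and not any(kw in name for kw in CODING_KEYWORDS))
```

Matching is case-insensitive here, so "XSS" and "Cross" are screened out alike; a note such as the SYSRET one passes the filter and moves on to manual root cause analysis.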
A ring3 attacker may be able to specifically craft a stack frame to be executed by ring0 (kernel) after a general protection exception (#GP). The fault will be handled before the stack switch, which means the exception handler will be run at ring0 with an attacker’s chosen RSP, causing a privilege escalation. Impact. This security vulnerability affects 64-bit operating systems or virtual machine hypervisors running on Intel x86-64 CPUs. The vulnerability means that an attacker might be able to execute code at the same privilege level as the operating system or hypervisor. When running a standard operating system, such as Linux or Windows, or a virtual machine hypervisor, such as Xen, a mechanism is needed to rapidly switch back and forth from an application, which runs with limited privileges, to the operating system or hypervisor, which typically has no restrictions. The most commonly used mechanism on the x86-64 platform uses a pair of instructions, SYSCALL and SYSRET. The SYSCALL instruction does the following:
• copies the instruction pointer register (RIP) to the RCX register
• changes the code segment selector to the operating system or hypervisor value
A SYSRET instruction does the reverse; that is, it restores the execution context of the application. There is more saving and restoring to be done—of the stack pointer, for example—but that is the responsibility of the operating system or hypervisor. The difficulty arises in part because the x86-64 architecture does not use full 64-bit addresses; rather, it uses 48-bit addresses, which gives a 256-terabyte virtual address space, considerably more than is used today. The processor has 64-bit registers, but a value to be used as an address must be in a canonical form; attempting to use a value not in canonical form results in a general protection (#GP) fault.
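The canonical-form rule is easy to state in code. A quick sketch, assuming the 48-bit addressing described above: an address is canonical when bits 63 down to 47 are all copies of bit 47, so the top 17 bits must be all zeros or all ones.

```python
# Canonical-form check for x86-64 virtual addresses (48-bit addressing
# assumed, as described in the post): bits 63..47 must all equal bit 47.
def is_canonical(addr: int, va_bits: int = 48) -> bool:
    top = addr >> (va_bits - 1)               # bit 47 and all bits above it
    all_ones = (1 << (64 - va_bits + 1)) - 1  # 17 bits of 1s
    return top == 0 or top == all_ones

# The lower and upper halves of the address space are canonical;
# anything in the "hole" between them faults with #GP when used.
assert is_canonical(0x00007FFFFFFFFFFF)       # top of lower half
assert is_canonical(0xFFFF800000000000)       # bottom of upper half
assert not is_canonical(0x0000800000000000)   # non-canonical hole
```

The exploit hinges on exactly this check: SYSRET returning to a non-canonical RIP triggers the #GP fault, and the two vendors differ on which privilege level the CPU is in when that fault arrives.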
The implementation of SYSRET in AMD processors effectively changes the privilege level back to the application level before it loads the application RIP. Thus, if a #GP fault occurs because the restored RIP is not in canonical form, the CPU is in application state, so the operating system or hypervisor can handle the fault in the normal way. However, Intel’s implementation effectively restores the RIP first; if the value is not in canonical form, the #GP fault will occur while the CPU is still in the privileged state. A clever attacker could use this to run code with the same privilege level as the operating system. Intel stated that this is not a flaw in its CPU since it works according to its written spec. However, the whole point of the implementation was to be compatible with the architecture as defined originally by AMD. Quoting from Rafal Wojtczuk, "The [proximate] root cause of the vulnerability is: on some 64 bit OS, untrusted ring3 code can force the kernel to execute SYSRET instruction that would return to a non-canonical address. On Intel CPUs, this results in an exception raised while still in Ring0. This exception cannot be handled safely." (Edited to clarify that this is an attribution of a more immediate (or proximate) root cause.) Clearly, many operating system and hypervisor vendors with considerable market presence were affected. Multiple parties could have prevented the vulnerability because Intel’s SDM is very clear on the behavior of SYSRET (and not every x86-64-based operating system or hypervisor was affected). For example, they could have adopted a safer transition back to the application following a SYSCALL. While originally noted and reported by the Linux community back in 2006, the vulnerability was characterized and easily dismissed as a Linux-specific issue. Also from Wojtczuk, "This is likely the reason why developers of other operating systems have not noticed the issue, and they remained exploitable for six years." 
Intel could also have prevented the vulnerability by not introducing a dangerous re-interpretation of how to return from a rapid system call.

Solution

The references above, coupled with the short window for designing, implementing, and releasing a resolution to the vulnerability (from April to June 2012), might give the impression that the software community easily found an alternative, safer way to handle SYSRET (e.g., returning other than through SYSRET, or checking for a canonical address). Implementing a safer method, however, was not so straightforward. That the same patch or approach might not work for all affected operating systems can be seen in the different ways the vulnerability can be exploited on different operating systems. So, each vendor must conduct its own careful analysis of which computing assets are at risk or can be leveraged for an exploit, and carefully redesign and code its system calls and returns to ensure a safe transition from application to system and back again. Also, the intent of SYSCALL/SYSRET is to reserve these instructions for tasks that belong to the operating system alone but for which execution performance is critical (e.g., by minimizing the saving of registers, except for those actually needed by the system function being called). Thus, the operating-system-specific patches need to be designed and coded for execution speed as well as safe transition. One of the vendors, Xen, has been particularly revealing about the considerable difficulties it encountered in working with select stakeholders to diagnose, design, code, and test patches for VU#649219, including providing a detailed timeline that describes an enormous amount of coordination and analysis behind the scenes, giving rise, no doubt, to enormous frustration. A detailed analysis of the three vulnerabilities is included in the appendices of our report.
Developing a Systems Dynamic Economic Model

After conducting a detailed analysis of the vulnerabilities, we next applied our knowledge of the Team Software Process (TSP). Created by Watts Humphrey, TSP guides engineering teams that are developing software-intensive products to establish a mature and disciplined engineering practice that produces secure, reliable software in less time and at lower cost. Our aim in constructing an economic model was to allow people to study systems with many interrelated factors using stocks and flows (dynamic simulation). In creating a simulation model, we first wanted to represent the normal behavior of the system and then change a few assumptions to see how the model's responses change. Creating an economic model using the system dynamics method, which is detailed in Business Dynamics: Systems Thinking and Modeling for a Complex World by John D. Sterman, enables analysts to model and analyze critical behavior as it evolves over time within socio-technical domains. A key tenet of this method is that the dynamic complexity of critical behavior can be captured by the underlying feedback structure of that behavior. Using Vensim, a dynamic simulation tool, we created a model that represents the design-vulnerability lifecycle and includes variables representing key design- and defect-related parameters gleaned from the literature search, the detailed vulnerability analysis, and experience with the TSP process. It is important to note that we did not calibrate the model with any one organization's specific data. To make the most of the economic model, one would need to calibrate it with an organization's specific data. This model is not usable (transitionable) as is, except to make a hypothetical argument for why design practice is important.

Wrapping Up and Looking Ahead

Our research confirmed that the current ship-then-fix approach to software quality is sub-optimal and, in the long term, untenable.
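To illustrate the stocks-and-flows idea, here is a toy Python sketch of a single stock (latent design defects) with one inflow and one outflow; the rates are invented for illustration and are not drawn from our calibrated model:

```python
def simulate_defects(steps=400, dt=0.25, inject_rate=4.0, find_frac=0.05):
    """Toy stock-and-flow simulation: one stock of latent design defects,
    an inflow (defects injected during development) and an outflow
    (a fraction of latent defects found per unit time)."""
    defects = 0.0
    trace = []
    for _ in range(steps):
        inflow = inject_rate            # defects injected per unit time
        outflow = find_frac * defects   # defects detected per unit time
        defects += (inflow - outflow) * dt  # Euler integration of the stock
        trace.append(defects)
    return trace

trace = simulate_defects()
# The stock climbs toward the equilibrium inject_rate / find_frac.
```

A real system dynamics model, of course, couples many such stocks through feedback loops; this fragment only shows the basic integration step that tools like Vensim perform.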
Our analyses of vulnerabilities included examples in which vulnerabilities could never be fully eradicated from the user community once the product was distributed. Moreover, the system dynamics model that we developed showed that even at the level of a single development increment, the economics often favor earlier attention to security-related requirements and design, as well as ongoing validation. In other words, it is often not necessary to consider longer time scales to experience benefits that exceed the costs, for all major stakeholders. Looking ahead, we would be interested in piloting and calibrating our economic model with an organization that has quality data on its defects, including where they originated. If you are an organization interested in piloting this economic model, please send an email to info@sei.cmu.edu. We welcome your feedback on our research in the comments section below. Additional Resources To read the SEI technical report, Data-Driven Software Assurance: A Research Study, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=90086. To read the recently-released Avoiding the Top 10 Software Security Design Flaws, published by the IEEE Computer Society Center for Secure Design, please visit http://cybersecurity.ieee.org/images/files/images/pdf/CybersecurityInitiative-online.pdf. To read the paper, Matching Attack Patterns to Security Vulnerabilities in Software-Intensive System Designs, by Dr. Laurie Williams and Michael Gegick, please visit http://collaboration.csc.ncsu.edu/laurie/Papers/ICSE_Final_MCG_LW.pdf.
SEI   .   Blog   .   Jul 27, 2015 01:26pm
By Lori Flynn, Member of the Technical Staff, CERT Secure Coding Team

This blog post was co-authored by Will Klieber. Each software application installed on a mobile smartphone, whether a new app or an update, can introduce new, unintentional vulnerabilities or malicious code. These problems can lead to security challenges for organizations whose staff uses mobile phones for work. In April 2014, we published a blog post highlighting DidFail (Droid Intent Data Flow Analysis for Information Leakage), a static analysis tool for Android app sets that addresses data privacy and security issues faced by both individual smartphone users and organizations. This post highlights enhancements made to DidFail in late 2014 and an enterprise-level approach for using the tool.

Analyzing Dataflows in Android Application Sets

The SEI's CERT Secure Coding Team has assisted numerous government and nongovernment organizations in vetting software developed for the Android operating system (OS), which dominates the mobile device market but continues to struggle with security problems. The Android OS has become the platform of choice for the Department of Defense (DoD), which recently purchased 7,000 Samsung Galaxy Note II mobile smartphones for the Army's Nett Warrior System, which aims to arm soldiers with the technology they need to make faster, more accurate decisions during combat. In early 2014, we designed and implemented a novel taint flow analyzer (DidFail) that combines and augments the existing Android dataflow analyses of FlowDroid (which identifies intra-component taint flows) and Epicc (which identifies properties of intents, such as the action string) to track both inter-component and intra-component dataflow in a set of Android applications.
DidFail enables an organization (e.g., enterprise, app store, or security system provider) to pre-analyze apps, so that the analysis for potential dataflow problems (within the set of apps on the phone) is fast when a user requests to install a new app. Phase 1 of DidFail can be performed on one application at a time and, once completed, does not need to be run again. Phase 2 of DidFail (app set analysis) typically takes only seconds. DidFail addresses a problem often seen in dataflow analysis: the leakage of sensitive information from a sensitive source to a restricted sink. Both "source" and "sink" are commonly used terms in flow analysis. We define a source as an external resource (external to the app, not necessarily external to the phone) from which data is read. We define a sink as an external resource to which data is written. By analyzing information flow, we can find issues that could affect data integrity and/or privacy. A privacy violation occurs when information flows from a sensitive source to a sink that is not authorized to receive the data. An integrity violation occurs when untrusted data is sent to a sink that is supposed to store only trusted data. The DidFail analysis of a given set of apps takes place in two phases: In the first phase, we determine the dataflows enabled individually by each app and the conditions under which these flows are possible. In the second phase, we build on the results of the first phase to enumerate the potentially dangerous dataflows enabled by the whole set of applications. DidFail differs from pure FlowDroid, which also analyzes flows of tainted information: FlowDroid's taint flow analysis is limited to a single component of a single app, whereas DidFail analyzes potentially tainted flows between apps and, within a single app, between multiple components.
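The two-phase idea can be sketched in a few lines of Python (a simplified illustration with invented app names and flow labels, not DidFail's actual output format):

```python
# Phase 1 output per app (hypothetical): flows from sources to sinks, where
# a sink may be an outgoing intent and a source may be a received intent.
phase1 = {
    "AppA": [("MICROPHONE", "intent:RECORDED")],   # records audio, sends intent
    "AppB": [("intent:RECORDED", "INTERNET")],     # receives intent, uploads
}

def phase2(phase1_flows):
    """Phase 2 (app set analysis): join per-app flows across matching intents
    to enumerate the flows enabled by the whole set of apps."""
    flows = []
    for app_a, app_a_flows in phase1_flows.items():
        for src, snk in app_a_flows:
            if not snk.startswith("intent:"):
                flows.append((app_a, src, app_a, snk))  # flow within one app
                continue
            # The sink is an intent: look for apps whose source matches it.
            for app_b, app_b_flows in phase1_flows.items():
                for src2, snk2 in app_b_flows:
                    if src2 == snk:
                        flows.append((app_a, src, app_b, snk2))
    return flows

flows = phase2(phase1)
```

Here the dangerous cross-app flow (microphone in AppA to the internet via AppB) only becomes visible when the two phase-1 results are composed; neither app exhibits it alone.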
Recent Enhancements to DidFail

In 2014, we worked to improve DidFail in collaboration with graduate students from Carnegie Mellon University's Information Networking Institute (INI) and Department of Electrical and Computer Engineering (ECE): Will Snavely (INI), Jonathan Burket (ECE), Jonathan Lim (INI), and Wei Shen (INI). Together, we made improvements to DidFail to help it succeed with more applications and detect more flows. First, we developed a new framework for testing the DidFail analyzer, which includes a setup for cloud-based testing and instrumentation to measure the performance of the analyzer. The cloud-based setup enables us to take advantage of commercially available, powerful virtual machines and to use multiple virtual machines in parallel for faster test completion. Additionally:

- We modified DidFail to use the most current versions of FlowDroid and Soot (which include a better module for converting between Android's .dex representation and Soot's Jimple intermediate representation), increasing its success rate from 18 percent to 68 percent on our test suite of real-world apps.
- We made enhancements to enable analysis of some dataflows through static fields, BroadcastReceiver components, and Service components.
- We developed new apps to test the analytical features added to DidFail.

Using the improved DidFail analyzer and the cloud-based testing framework, we tested the system on the new test apps and apps from the Google Play store. The SEI technical report Making DidFail Succeed: Enhancing the CERT Static Taint Analyzer for Android App Sets details the new testing framework, enhancements to DidFail, newly developed test apps, and test results. The latest enhancements are available for free download, with instructions for building DidFail from the source code.

DidFail Practicality at an Enterprise Level

Our initial prototype yielded too many false positives.
In particular, DidFail gave too many warnings about potential flows that turned out not to be realizable. We are currently improving the precision of DidFail while also developing a realistic, feasible plan for deploying it at the enterprise level.

Deploying DidFail at an Enterprise Level

Enterprises could use DidFail to protect their (and their personnel's) data on Android smartphones. DidFail identifies dataflows from sources to sinks for a set of apps. When it is implemented in an enterprise, there will be multiple apps, numerous flows, and users unfamiliar with the concept of a dangerous flow. As recommended in the NIST report Vetting the Security of Mobile Applications, organizations need to create policies concerning permitted and forbidden dataflows. An enterprise could use DidFail in conjunction with a commercial security system to help set and enforce dataflow policies. The enterprise's subdivisions (e.g., IT, security, and administration) and Android end users could also provide input useful for protecting the organization's particular types of sensitive data in its particular use scenarios. Policies could be expressed as white lists or black lists:

- White list example: A policy might specify that data from the microphone cannot flow to any sink except the public switched telephone network (PSTN).
- Black list example: A policy might be set to alert the user if an app set could allow data to flow from on-device storage to the internet.

As future work, DidFail can be extended to report what user interaction, if any, is required for a flow to occur. For example, consider an app that records audio from the microphone and sends this audio data over the internet. This flow might be considered unacceptable if the app launches when the phone starts up and doesn't require any user interaction. On the other hand, the flow might be considered acceptable if the app requires the user to press a "record" button before reading from the microphone.
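A white-list/black-list policy check over discovered flows might be sketched as follows (the policy entries and the flow representation are hypothetical, for illustration only):

```python
# Hypothetical dataflow policies; the source/sink labels are invented and
# are not DidFail's actual output format.
WHITE_LIST = {"MICROPHONE": {"PSTN"}}          # allowed sinks per source
BLACK_LIST = {("DEVICE_STORAGE", "INTERNET")}  # forbidden (source, sink) pairs

def violations(flows):
    """Return the (source, sink, reason) triples that break either policy."""
    bad = []
    for src, snk in flows:
        if src in WHITE_LIST and snk not in WHITE_LIST[src]:
            bad.append((src, snk, "sink not on white list"))
        if (src, snk) in BLACK_LIST:
            bad.append((src, snk, "pair on black list"))
    return bad

print(violations([("MICROPHONE", "INTERNET"),
                  ("DEVICE_STORAGE", "INTERNET"),
                  ("MICROPHONE", "PSTN")]))
```

A deployed system would evaluate such a check whenever a newly requested app changes the set of realizable flows, and then apply one of the mitigations discussed below.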
When flows are identified that don't comply with one or more policies, there are a variety of mitigations or steps that can be taken. For personal use, the system can alert the user to the existence of the flow and confirm whether the user still wants to install the app. For use in an enterprise, the system might refuse installation of the app. Alternatively, if the non-compliant flow relies on the existence of multiple installed apps, the system could give the option to remove previously installed apps to prevent the non-compliant flow. Another option would involve dynamically blocking non-compliant flows, which can be accomplished through a variety of approaches:

- The app could be blocked from reading particular sources.
- The app could be blocked from writing to particular sinks.
- When an app sends an implicit intent (i.e., an intent that does not expressly designate its recipient), the system could block that intent from being sent to certain other apps.

These dynamic changes would require modifications to the Android OS. Some systems that work within or replace the Android OS, such as CyanogenMod and TaintDroid, can perform those operations. Our DidFail taint flow analysis tool could be incorporated into many application security-checking systems. We hope DidFail will eventually be used widely, including in public app stores and their integrated security systems, such as the Google Play Store; corporate enterprise Android security systems; mobile computing security systems; and government app stores and processes, including NIST's AppVet mobile app vetting system, DHS's Carwash system for inspecting app security, the DARPA Trans Apps secure app store, and the DoD's planned enterprise Mobile Application Stores (MAS) and conjunct Mobile Device Management (MDM) systems. Enterprises wishing to use smartphones in a safe and effective manner need to institute policies governing their use.
Studies such as Android Permissions: User Attention, Comprehension, and Behavior have shown that end users, if presented with a request for permissions, generally approve the permissions. The research results discussed in Android Permissions Demystified indicate that developers often request unnecessary permissions and do not understand the true implications of permissions. The Android platform requires users to accept all requested permissions, or the app does not install. In contrast, CyanogenMod, an open-source Android-based OS, allows the user to grant or deny individual permissions to the app. Similar policies could be used for an enterprise's iPhone and Android devices at a more abstract level, to prevent dataflows from a specified source to a specified sink (e.g., an organization that prohibits data from corporate email from being sent out as a text message). Our work focuses on the mechanism rather than the policy, but our analytical mechanism would enable the enterprise to enforce the policies it needs or wants. Prior to DidFail, no tool existed that would allow organizations to statically trace tainted dataflows through multiple apps. Beyond using DidFail as a component in a system for setting, analyzing, and enforcing dataflow policies for Android smartphones, enterprises may use DidFail analyses for additional purposes. Security researchers can use DidFail to check for dataflows that can be exploited in ways end users would not expect. Developers can also use DidFail to check whether the app being developed might accidentally enable dangerous dataflows. Figure 1 shows an example flow of information from the time a user "UserX" asks to install an app, through the enterprise-integrated DidFail app set taint flow analysis and enterprise policy check. In this example, the dataflows found were policy-compliant and the app was allowed to be installed.
Other Tool Improvements and Looking Ahead

"New technologies may offer the promise of productivity gains and new capabilities, but if these new technologies present new risks, the organizations' IT professionals, users, and business owners should be fully aware of these new risks and develop plans to mitigate them or be fully informed before accepting the consequences of them," Quirolgico et al. wrote in NIST Special Publication (SP) 800-163, Vetting the Security of Mobile Applications. In addition to developing approaches for implementing DidFail at the enterprise level, our team continues to work to improve the precision and soundness of the tool, so that it enables organizations and individuals to have greater awareness of the risks associated with adopting Android smartphones and to develop plans for mitigating those risks. We are interested in collaborating with organizations to pilot future prototypes. Organizations interested in collaborating with us are invited to contact us. Additional Resources Read the SEI technical report, Making DidFail Succeed: Enhancing the CERT Static Taint Analyzer for Android App Sets, for more detailed information about the work described in this post. To download the DidFail tool, please visit https://www.cert.org/secure-coding/tools/didfail.cfm. Read the SEI technical report, Mobile SCALe: Rules and Analysis for Secure Java and Android Coding, for more information about the work described in this blog post. To view the Android wiki on the CERT Secure Coding site, please visit https://www.securecoding.cert.org/confluence/pages/viewpage.action?pageId=111509535.
SEI   .   Blog   .   Jul 27, 2015 01:25pm
Sommer and Sandra are graduates of the first cross-government Digital Academy. It was held in February this year, shortly before the Digital Academy celebrated its first birthday. Sandra works in the Legal Aid Agency, and Sommer within the NHS Health and Social Care Information Centre. They talk about their experience of the cross-government Digital Academy here.

Sandra Berry - Legal Aid Agency

I was at CS Live in Newcastle in July 2014, and one of the presentations from DWP was ‘Digital Transformation in Government’. DWP had introduced a Digital Academy to build capability and push the digital transformation. This was a light-bulb moment for me, and my first thought was that this is exactly what the Legal Aid Agency needs. I focussed on finding out more about how the Academy was set up, how it worked and how successful it had been. I joined the first cross-government Digital Academy and, although I was nervous and not sure what to expect as I had never worked with the agile methodology, I needn’t have worried. The opportunity surpassed my expectations of what could be achieved in a very short time. I am privileged to have had the opportunity to work with such a fantastic group and extraordinary leaders at the Academy. If government departments are really looking to transform people’s lives, they need an Academy exactly like the one at DWP. I had an amazing time and have never felt so empowered to look forward to the future and really transform people’s lives.

Sommer Croft - Health and Social Care Information Centre

I had a different background to many of the other Digital Academy cohort members; I’d worked as part of an agile team for 2 years delivering the new backend infrastructure for the NHS, and was a Scrum Master for 5 months.
The cross-government Digital Academy was a fantastic experience for me; it consolidated so much of my learning and taught me new tools and techniques which I know I will be able to introduce within my organisation.   Several of my cohorts and I had a debate on whether DWP was the right government department to run the cross-government Digital Academy; and if it should be managed centrally by someone such as GDS. We all came to the conclusion that DWP were excellently placed to run this and establish a cross-government community as all cohorts felt at ease and could be open and honest around issues within their own departments; as well as having discussions on how to resolve these issues. If we’re truly going to transform digital services for citizens in the UK we need every government department to be taking this forward for their services and users.
DWP Digital   .   Blog   .   Jul 27, 2015 01:25pm
Congrats - You’ve been promoted!  Oftentimes, when you accept a new role at your current company, you will find yourself caught between your old duties and your new duties.  As in any new role, there is likely a defined transition period - typically between 2-3 weeks.  But what happens when your old team comes to you on day 2 of your new role and asks you to take care of something for them?  It’s important to understand when and how to say no to your old team. After you’ve accepted your new...
SHRM   .   Blog   .   Jul 27, 2015 01:25pm
By Tim Palko, Senior Member of the Technical Staff, CERT Cyber Security Solutions Division

This post is the latest installment in a series aimed at helping organizations adopt DevOps. The workflow of deploying code is almost as old as code itself. There are many use cases associated with the deployment process, including evaluating resource requirements, designing a production system, provisioning and configuring production servers, and pushing code, to name a few. In this blog post I focus on a use case for configuring a remote server with the packages and software necessary to execute your code. This use case is supported by many different and competing technologies, such as Chef, Puppet, Fabric, Ansible, Salt, and Foreman, just a few of the tools you are likely to hear about on the path to automation in DevOps. All these technologies have free offerings, leave you with scripts to commit to your repository, and get the job done. This post explores Fabric and Ansible in more depth. To learn more about other infrastructure-as-code solutions, check out Joe Yankel's blog post on Docker or my post on Vagrant. One difference between Fabric and Ansible is that while Fabric will get you results in minutes, Ansible requires a bit more effort to understand. Ansible is generally much more powerful, since it provides much deeper and more complex semantics for modeling multi-tier infrastructure, such as those with arrays of web and database hosts. From an operator's perspective, Fabric has a more literal and basic API and uses Python for authoring, while Ansible consumes YAML and provides a richness in its behavior (which I discuss later in this post). We'll walk through examples of both in this post. Both Fabric and Ansible employ secure shell (SSH) to do their job in most cases.
While Fabric leverages execution of simple command-line statements on target machines over SSH, Ansible pushes modules to remote machines and then executes these modules remotely, similar to Chef. Both tools wrap these commands with semantics for basic tasks such as copying files, restarting servers, and installing packages. The biggest difference between them is in the features and complexity presented to the operator. Here is a Fabric script that installs Apache on a remote server:

fabfile.py

    from fabric.api import env, sudo

    env.hosts = ['foo.bang.whiz.com']

    def install_apache():
        sudo('apt-get install apache2')

This script is executed with:

    $ fab install_apache

One obvious note here is that we are writing in Python, which gives the operator all the features of the language. In this Fabric example, we create a task, install_apache, call the sudo() operation, and literally spell out the command we want to execute. Fabric handles reading the host name from the env dictionary we set. In contrast, here is an Ansible script that does the same thing Fabric did above, using a "playbook" and a "role":

hosts

    foo.bang.whiz.com

roles/web/tasks/main.yml

    - name: install Apache
      apt: name=apache2 state=present

site.yml

    - name: install Apache
      hosts: foo.bang.whiz.com
      roles:
        - web

This script is executed with:

    $ ansible-playbook site.yml

The playbook, and point of entry, is site.yml. This script declares plays, where each play states which hosts each role should be applied to. Each play starts with a name parameter and goes on to declare its targeted hosts and the roles to use. The roles themselves are defined by a structure of subfolders containing more YAML that defines which modules to execute, with which parameters, for that role. In this example, we define a web role containing the apt module. There is a subtle distinction about roles: hosts do not have roles. Instead, hosts are decorated with roles according to the playbook.
Also, a playbook can have multiple plays, multiple roles can be applied to a host, roles can have multiple task files, and tasks can have multiple modules. Moreover, we can define groups for hosts and even put those groups into higher-level groups. Here is a more complete Ansible example:

hosts

    [webservers]
    foo01.bang.whiz.com
    foo02.bang.whiz.com

    [dbservers]
    db.bang.whiz.com

site.yml

    - name: configure a webserver
      hosts: webservers
      roles:
        - web

    - name: configure a database server
      hosts: dbservers
      roles:
        - db

roles/web/tasks/main.yml

    - name: install apache
      apt: name=apache2 state=present

roles/db/tasks/main.yml

    - name: install mysql
      apt: name=mysql-server state=present

All the elements in this example are executed with:

    $ ansible-playbook site.yml

First, notice that we have added more hosts and grouped them in the hosts file. Second, we've added a second play to the playbook. One nice feature that we don't see by looking at the playbook or the roles is that Ansible will gather information for all of the hosts at runtime and apply only the changes necessary to obtain the desired state. In other words, if it ain't broke, don't fix it. Also, note that this is a stripped-down example of Ansible, and does not exemplify its many other features, such as defining and iterating over lists in a module call, using metadata from the hosts (such as IP addresses and OS versions) dynamically at runtime, and chaining roles together as dependencies. I highly recommend watching the Ansible quick start video here. Now, back to Fabric.
Here is roughly the same result using Fabric's tooling:

fabfile.py

    from fabric.api import env, roles, sudo, execute

    env.roledefs['webservers'] = ['foo01.bang.whiz.com', 'foo02.bang.whiz.com']
    env.roledefs['dbservers'] = ['db.bang.whiz.com']

    @roles('webservers')
    def install_apache():
        sudo('apt-get install apache2')

    @roles('dbservers')
    def install_mysql():
        sudo('apt-get install mysql-server')

    def deploy():
        execute(install_apache)
        execute(install_mysql)

Note that we are contained to a single file, although the raw size of our configuration in bytes is roughly the same as in Ansible. On a more technical level, Fabric's semantics are much "thinner" than Ansible's. For example, when we target a host with a role in Ansible, we are effectively asking it to check the host for a multitude of data points and evaluate its state before running any commands. Fabric is more of a what-you-see-is-what-you-get implementation, as demonstrated by its API: "run", "put", "reboot", and "cd" are common operations. A consequence of this simplicity is the lack of the rich features that we see in Ansible, such as its ability to pull in host information dynamically and use that information during execution. Here is a simple example of using Ansible's dynamic host information:

roles/web/tasks/main.yml

    - name: install apache
      apt: name=apache2 state=present

    - name: deploy apache configuration
      template: src=apache.conf.j2 dest=/etc/apache2/sites-enabled/apache.conf

roles/web/templates/apache.conf.j2

    <VirtualHost {{ ansible_default_ipv4.address }}:80>
    ...
    </VirtualHost>

Here we see a new module being used: "template". By convention, Ansible looks in the role's "templates" folder for the file supplied to the "src" attribute and deploys it to the location supplied to the "dest" attribute.
But the magic here is that, prior to applying this role, Ansible gathers a list of what are actually called "facts" from the host and provides that data to us in the scope of our YAML. In this example, it means we can supply our Apache configuration file with the IP address of whatever host the role is applied to. Getting this kind of behavior with Fabric is work left to the operator. One last topic is how these tools handle authentication. Ansible's answer is in the playbook:

site.yml

    - hosts: webservers
      remote_user: alice
      sudo: yes       # optional
      sudo_user: bob  # optional

With Fabric, we simply set the env variable:

fabfile.py

    from fabric.api import env

    env.user = 'alice'

Both Fabric and Ansible can also use your public key, removing the need to enter passwords. This blog posting provided a light introduction to two fairly powerful solutions to the infrastructure-as-code problem. By this point, you may have already decided which direction you want to go, but it's more likely that you have more questions than you started with. There are many features of both Fabric and Ansible that are best left to their respective official documentation, but hopefully this post helped you get started. Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below. Additional Resources To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js. To read all of the blog posts in our series thus far, please visit http://blog.sei.cmu.edu/archives.cfm/category/devops-tips.
SEI   .   Blog   .   Jul 27, 2015 01:25pm
Ben Holliday - Head of User Experience Design

In my previous Digital Academy blog post, I wrote about how to get started with discovery research. Ideally your team will have been part of this process, getting involved with research and taking part in analysis sessions. For people like product managers or key stakeholders this won’t always happen. In agile teams time is limited. The reality is people won’t have the time to read research reports, and they won’t always have the same shared understanding of problems when it comes to planning and prioritisation. Teams need ways of getting hold of the actionable learnings from research. The key to this is ‘insights’.

Introducing insights

Insights are short statements that give us enough shared understanding to take an action. For example, this is an insight from our State Pension research: "People confuse their current State Pension with the amount they’ll get when they retire". The best ideas, innovations, or product iterations can come from a single insight like this. It’s worth considering that insights will often be the result of frustrations: seeing people struggle with a product.

What makes a good insight

Insights should feel simple because they are simple. They should also be provocative - this is what makes them actionable. They should challenge how we think about the problems we’re trying to solve, helping us to find new and improved ways of meeting user needs. The best insights are like problem statements. We want our teams to use their experience and skills to solve the design challenges they create. Insights should make space for this to happen. This is where the big and interesting ideas are. Designers will naturally make intuitive leaps. They’ll be able to develop ideas that move a team towards building something that works better for users - solutions that respond to individual insights. User research won’t tell you what to do. You need to work towards solutions using what you’re learning as a guide.
Insights aren't an exact science

It's important to remember that we're looking for insights, not proof. Design research isn't scientific. We still need to use our own interpretations of data to get to something that's actionable. It's therefore okay if insights turn out to be wrong. We only learn how true our insight statements are by doing more research. Even when we're not right we're still making informed design decisions. This is also how we deal with sample size and confidence in our research. Insights are not a full representation of all customers, but they help us make design decisions that can benefit everyone. When working with large data sets, teams can suffer from analysis paralysis. Problems can be hard to see in complex data. Insights can help cut through and give focus to a team.

How to get to insights

The data from user research will always be messy and needs analysis. It's hard to get to good insights, so research analysis is very important. A good approach is to affinity sort observations from your research into common themes. See Natalie's blog post about capturing user feedback to get started. Insights should be written down as part of this research analysis. You're looking to get to a clear, actionable insight for each theme or group of observations.

Some insight examples

"People don't know that they can turn on their heating when the weather is cold"

This is an example from our Digital Academy project Cold Weather Payments. It's a good insight that helps us set a design challenge: "how can we let people know they've received a cold weather payment?".

"People on low incomes don't know whether they're employed or self-employed"

This example is from our live Carer's Allowance digital service. Again, it's a good insight. It's okay that it's framed as a statement of fact; this makes it provocative and actionable.
The reality is some people understood if they were self-employed, but stating this as a truth makes it easier to set a design challenge that might benefit everyone. For example: "how can we capture employment information without people having to self-determine if they're employed or self-employed?" Note that neither of these examples suggests a solution.

Making insights part of your workflow

Look for patterns in the insights you find as you iterate a product through different rounds of user research. Eventually you should find that insights are an important part of your workflow. Your team will rely on them in their decision making and planning sessions. They can also be a useful focal point for show & tell sessions with your team and wider stakeholders: a great way to communicate what you're learning.
DWP Digital Blog, Jul 27, 2015 01:24pm
Because the Faragher affirmative defense to illegal harassment grew out of the seeds we planted in my amicus brief for SHRM in the United States Supreme Court, I treat harassment policies and their complaint procedures with extra tender love and care.  If the Human Resource Professional is going to be the gardener who prevents internal harassment complaints from growing into lawsuit weeds, every word in the policy must be chosen with the knowledge that it will be placed under a microscope by your employees, your employees’ attorneys, and possibly the EEOC or the NLRB.  Let’s start...
SHRM Blog, Jul 27, 2015 01:23pm
The current Parliament ended on 30 March. Between then and the general election on 7 May is the pre-election period and the Civil Service communicates less, in line with the General Election Guidance. In the 5 weeks up to the election, we’ll only use this blog to provide information essential to continuing the government’s day-to-day work. This applies to @DigitalDWP too.
DWP Digital Blog, Jul 27, 2015 01:23pm
By Mike Gagliardi, Principal Engineer, Software Solutions Division

In Department of Defense (DoD) programs, cooperation among software and system components is critical. A system of systems (SoS) is used to accomplish a number of missions where cooperation among individual systems provides new capabilities that the systems could not provide on their own. SoS capabilities are a major driver in the architecture of the SoS and the selection of its constituent systems. There are additional critical drivers, however, that must be accounted for in the architecture because they significantly affect the behavior of the SoS capabilities, as well as the development and sustainment of the SoS and its constituent systems' architectures. These additional drivers are the quality attributes, such as performance, availability, scalability, security, usability, testability, safety, training, reusability, interoperability, and maintainability. This blog post, the first in a series, introduces the Mission Thread Workshop (MTW) and describes the role it plays in helping SoS programs elicit and refine end-to-end SoS mission threads augmented with quality attribute considerations. A mission thread is a sequence of end-to-end activities and events, presented as a series of steps, that accomplishes the execution of one or more capabilities that the SoS supports. Simply listing and describing the steps, however, does not reveal all the important concerns associated with cooperation among the systems to accomplish the mission and their ensuing behaviors. Articulating and understanding the architectural considerations associated with the mission threads are therefore critical to identifying any architecture mismatches between systems and the SoS. Mission threads must be augmented with quality attribute considerations.
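To make the structure concrete, the mission thread described above (an ordered sequence of steps, each of which the workshop augments with quality attribute considerations) can be sketched as a small data model. This is purely illustrative; the class and field names are my assumptions, not artifacts of the MTW method:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One end-to-end activity or event in a mission thread."""
    description: str
    # Quality attribute considerations recorded during the workshop,
    # e.g. {"performance": "..."} (illustrative keys and values only).
    quality_considerations: dict = field(default_factory=dict)

@dataclass
class MissionThread:
    """A sequence of steps executing one or more SoS capabilities."""
    name: str
    vignette: str  # the short story giving the environmental context
    steps: list = field(default_factory=list)

    def augment(self, step_index, attribute, consideration):
        """Attach a quality attribute consideration to one step."""
        self.steps[step_index].quality_considerations[attribute] = consideration

# Build an un-augmented thread, then augment it as a workshop would.
thread = MissionThread(
    name="Ballistic missile defense",
    vignette="Two ships protect a fleet containing two high-value assets.",
    steps=[Step("Detect threat"), Step("Determine threat intent")],
)
thread.augment(0, "performance", "end-to-end detection latency matters")
```

The point of the model is simply that the steps and the vignette exist before the workshop, while the quality attribute annotations are its output.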
An MTW uses existing SoS end-to-end mission threads and accompanying architecture plans and augments the mission threads with quality attribute, engineering, and capability considerations with inputs from stakeholders. The MTW identifies architecturally significant SoS challenges that are distilled from the architecture, engineering, and capability issues identified in the post-MTW analysis of the quality attribute augmented mission threads. The MTW also identifies candidate legacy systems that have potential architectural mismatches with the SoS and provides the architecture context from which to perform post-MTW architecture evaluations of the candidate legacy system and software architectures.

A mission thread takes place in the context defined by a vignette. The purpose of a vignette is to describe those environmental factors that may be architecturally significant—that is, they impose a constraint on the architecture that would not exist in the absence of these environmental factors. This story can usually be told in a paragraph or two, with an accompanying context diagram showing the nodes containing the systems.

The quality attribute augmented mission threads and SoS challenges serve as important inputs to architecture development, architecture evaluation, and constituent system and software architecture evaluations. The MTW is based on the principles of the Software Engineering Institute's software architecture methods and practices, extended and scaled into the SoS domain. These principles include:

- eliciting stakeholder inputs early in the life cycle
- articulating and addressing the quality attributes that drive the architecture early in the life cycle
- identifying challenges impacting architectural decisions early in the life cycle

Un-augmented mission threads and vignettes are critical inputs to the MTW and are developed during the preparation phase. A vignette is a short story about the environment that provides the context in which the SoS exists.
Mission threads and vignettes are developed by the SoS program's architect(s) with assistance from the MTW team; they are typically based on existing mission threads and vignettes for the SoS and updated for use in the MTW. Sometimes mission threads or vignettes must be developed from scratch if no relevant ones exist.

Mission Threads and Operational Vignettes

Through our work on a number of SoS and commercial enterprise architectures within the DoD, we have identified three basic types of mission threads:

- An operational mission thread describes how SoS nodes (and perhaps the systems within the nodes) react to an operational stimulus. It is presented as an end-to-end sequence of steps (external events, operator activities, and automated activities) that take place over a time period. For example, an operational mission thread for a DoD command-and-control system might begin with threat detection, followed by a number of steps to determine the intent of the threat, make decisions to counter the threat, apply the countermeasures, and finally document the commander's assessment of damage after completion.
- A development mission thread focuses on development activities, including adding new capabilities, technology refreshment, integration, test, certification, and release.
- A sustainment mission thread focuses on deployment, installation, sustainment, or maintenance; it describes how the SoS nodes operate together to sustain the SoS.

Example

The following is an operational vignette for ballistic missile defense in a naval context: Two ships (Alpha and Beta) are assigned to air defense to protect a fleet containing two high-value assets. A surveillance aircraft and four unmanned aerial vehicles (UAVs; two pairs) are assigned to the fleet and controlled by the ships. A pair of UAVs flying as a constellation can provide fire-control quality tracks directly to the two ships.
A two-pronged attack on the fleet occurs: five aircraft-launched missiles from the southeast; three minutes later, seven submarine-launched missiles from the southwest. The fleet is protected with no battle damage.

An example mission thread (un-augmented) supporting the Ballistic Missile Defense vignette is provided in the following table; it serves as an input to the MTW and will be augmented with quality attribute, engineering, and capability considerations during the MTW.

Conceptual Flow

The figure below depicts the conceptual flow of the MTW. SoS drivers and capabilities inform the development of vignettes and mission threads, from which a set of quality attributes is derived. Plans for SoS architecture development include an initial set of architecture views that support the vignettes and mission threads. Constituent legacy systems are identified for consideration in the SoS architecture, based on the capabilities and mission threads. The MTW team uses these inputs to augment the mission threads with quality attribute considerations, as well as engineering and capability considerations, with participation from SoS and legacy system stakeholders. The other outputs from the MTW are the SoS challenges derived from the issues identified in the qualitative analysis. The augmented mission threads and challenges will drive important architectural decisions during architecture development.

Concluding Remarks

Many quality attributes in SoS programs require architectural support at various levels, accompanied by analyses of the various architectural approaches and their trade-offs. Failure to address quality attributes in the architectures can lead to serious consequences, such as operational and developmental failures; addressing these late in the development life cycle increases risks to programs. These risks are exacerbated in SoS architectures comprising legacy system and software architectures.
After performing more than 30 MTWs in the context of SoS programs, SEI researchers have observed several trends:

- Buy-in from the principals and the stakeholders is critical to the success of the approach, as in all stakeholder-focused methods. To encourage buy-in, the SEI's MTW teams often developed an initial thread in the stakeholders' domains for review.
- Strong third-party facilitation has been critical to the success of the MTW approach. Challenges identified by consensus under independent third-party facilitation tend to benefit from higher group buy-in than do challenges identified by various factions among SoS stakeholders. We also discovered that the architectural challenges identified in MTWs often result in updates to the SoS CONOPS and mission needs statements.
- Although third-party facilitation is important, motivated programs can adopt the MTW process for their own use. For example, after participating in a few MTWs facilitated by the SEI, a DoD Naval program was able to successfully execute its own MTWs.
- SoS architecture principles and guidelines were often absent when we began an engagement. Documenting the principles and guidelines for the SoS architecture is beneficial to drive and constrain the architecture development process for both the SoS and its constituent systems, including legacy modifications. We found that the MTW approach feeds the development of these principles and guidelines.
- With proper scoping and selection of vignettes and mission threads, programs can easily attain a high percentage of coverage of their SoS's envisioned capability, which can provide a focused starting point for the architecture development effort.

Overall, our experience in executing a number of MTWs for a variety of clients has been very positive.
The clients all gained insight into the desired behavior of their SoS and constituent systems; identified architectural challenges, engineering challenges, and capability gaps; and defined architecture guidelines and principles to guide their development. These outcomes help a program avoid the costly consequences of development and operational failures and lead to more cost-effective and on-time delivery of new capabilities for the SoS. This blog post is the first in a series on the MTW. Future posts will explore the steps of the MTW in detail and provide examples, experiences, and lessons learned in its application.

Additional Resources

To read the SEI technical report, Introduction to the Mission Thread Workshop, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=63148.

To view the presentation, Identifying Architectural Challenges in System of Systems Architectures, please visit http://www.acq.osd.mil/se/webinars/2014_07_15-SoSECIE-Gagliardi-brief.pdf.

To view the SATURN presentation, Mission Thread Workshop: Lessons Learned, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=19898.

To view the SATURN presentation, Mission Thread Workshops: Lessons Learned in End-to-End Capability and Quality Attribute Specification for SoS Architecture Development, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=19890.
SEI Blog, Jul 27, 2015 01:23pm
On the Carer's Allowance digital service we've had two business subject matter experts (SMEs) for the past 12 months: Kathryn Baxendale and myself. Kathryn has been on the service since the very start of discovery, working with the Government Digital Service; she knows the service and the business inside and out. I came on board 12 months ago to support the introduction of CASA and to help pass the service standard assessment. We have both been working in the Carer's Allowance Unit for over 9 years. We are the voice in the room for the Carer's Allowance Unit and are empowered to make decisions on behalf of the business. Basically, we support the end-to-end development of a user story throughout its journey until it is released into live.

When you are building a service that replaces or improves an existing one, it's critical to understand what's currently out there: how it works and how staff feel it could be improved. Staff speak to users every day. They understand the benefit and the current service better than anyone, and they have fantastic ideas to make things better for customers, which in turn will make things better in their own jobs.

Some of what an SME does

- We represent the business on all user stories, ensuring the business is ready to support these critically needed changes.
- As the business voice in the room, we support the Product Owner and the rest of the team with the prioritisation of user stories. Some stories will achieve business benefits as well as improving the journey for users. It's key that these are prioritised alongside other stories, and this is where knowledge of the business comes in.
- We sense-check content changes before seeking approval from policy teams, and we keep in regular contact with policy to keep them informed of current and future changes.
We undertake field assurance testing prior to any release, and we write guidance and communications for each fortnightly release, helping staff understand the latest changes to the service and how these changes will affect their day-to-day jobs and the user journey. We have iterated our guidance to make sure that it quickly and clearly communicates these changes, and we have also worked on making guidance very visible. Crucially, we are part of user research, where we help Researchers and Content Designers understand some of the complexities of the benefit so they can build workable solutions to problems faced by users. As well as being observers, insight note takers and part of the evaluations, we support research in any way we can, such as playing the role of a web chat agent. User research is a team sport and we are part of that team.

Co-location, co-location, co-location

It's been vitally important that we are co-located with the team, working with them every day; this is not something that can be done part time. We have understood the proposed changes and readied the business for them, answering questions from User Researchers, Content Designers, Business Analysts, Developers and Product Owners so they can make informed decisions. We are part of the constant and continuous conversations.

And finally

If you haven't currently got a business SME and you are looking to replace or improve an existing service, then I would recommend getting one (or two if you're lucky).
DWP Digital Blog, Jul 27, 2015 01:22pm
By Chris Taschner, Project Lead, CERT Cyber Security Solutions Directorate

This post is the latest installment in a series aimed at helping organizations adopt DevOps. "Software security" often evokes negative feelings among software developers because the term is associated with additional programming effort and uncertainty. To secure software, developers must follow many guidelines that, while intended to satisfy some regulation or other, can be very restrictive and hard to understand. As a result, a lot of fear, uncertainty, and doubt can surround software security. This blog post describes how the Rugged Software movement attempts to combat the toxic environment surrounding software security by shifting the paradigm from following rules and guidelines to creatively determining solutions for tough security problems.

Rugged software moves from prescriptive restrictions to a set of DevOps principles. Emphasizing these principles enables developers to learn more about what they are developing and how it can be exploited. Rather than blindly following the required security measures, developers can understand how to think about making their applications rugged. As a result, they can derive their own creative ways to solve security problems. As part of understanding the challenges associated with secure software development, rugged software developers look more at how their software responds in all kinds of situations. Rather than reacting to new attacks, rugged software should be proactively focused on surviving by providing reliable software with a reduced attack surface that is quick both to deploy and to restore. In other words, developers worry less about being hacked and more about preventing predictable attacks and quickly recovering from being hacked. In the past, software security focused on anticipating where and how attacks would come and putting up barriers to prevent those attacks.
However, most attacks—especially sophisticated attacks—can't be anticipated, which means that fixes are bolted on as new attacks are discovered. The inability to anticipate attacks is why we often see patches coming out in response to new zero-day vulnerabilities. Rugged software developers would rather their software absorb the attacks and continue to function. In other words, it should bend but not break. This shift from a prevention mindset to a bend-don't-break mindset allows a lot more flexibility when it comes to dealing with attacks. In a rugged world, things can be dropped because they will survive the fall and bounce right back up. Becoming rugged requires the development team to focus on continuous integration, infrastructure as code, eliminating denial of service (DoS), and limiting the attack surface.

Continuous Integration

Building software should be automated and repeatable. These attributes can best be achieved through continuous integration: continually merging source code updates from all developers on the team. When a software development team can reliably, quickly, and easily integrate its code, it can focus on creating robust, rugged code rather than worrying about integration. In addition, the team can rule out integration issues when debugging, decreasing the time and effort required to find and fix bugs. Quickly deploying a fix means quickly patching holes and quickly improving security.

Infrastructure as Code

Software tests and infrastructure should be easy to validate and not left to the whim of whoever is setting up the infrastructure or performing testing. The whole team should be able to review exactly what is going on when the infrastructure is being set up and the code is being tested. Infrastructure (and testing) as code is one way to make infrastructure easy to validate.
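To illustrate the idea (this sketch is not from the post, and the function and state names are invented for illustration), infrastructure as code boils down to a declarative desired state kept in version control plus an idempotent routine that computes how to reach it:

```python
def reconcile(desired, current):
    """Return the actions needed to move `current` state to `desired`.

    Both arguments map package name -> version. Running the returned
    actions and then reconciling again yields an empty action list
    (idempotence), which is what makes setup repeatable and reviewable.
    """
    actions = []
    for pkg, version in sorted(desired.items()):
        if current.get(pkg) != version:
            actions.append(("install", pkg, version))
    for pkg in sorted(current):
        if pkg not in desired:
            actions.append(("remove", pkg))
    return actions

# The desired state lives in version control alongside the application,
# so the whole team can review exactly what the infrastructure will be.
desired = {"nginx": "1.24", "openssl": "3.0"}
print(reconcile(desired, {"nginx": "1.22"}))
# [('install', 'nginx', '1.24'), ('install', 'openssl', '3.0')]
```

Real tools (Puppet, Chef, Ansible, and the like) are far richer, but they share this declare-then-reconcile shape.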
Eliminating Denial of Service (DoS)

DoS attacks are often easy to launch and can be expensive to defend against, which makes eliminating them tricky. Quite a few people have written about eliminating DoS: Symantec has an article, and Rugged Software has a Rugged Implementation Guide. Even though DoS attacks have been around for quite a while, they are not abating. Moreover, they still manage to do considerable damage, which means it is important to have a solution to this problem.

Limiting the Attack Surface

Finally, it is very important to limit the attack surface of your application, that is, the amount of the application that is exposed to nefarious individuals attempting to exploit weaknesses. Limiting the amount of your application that is accessible to possible attackers limits the number of attacks that your application will encounter. For more information on how to limit your application's exposure, see SANS and the InfoSec Institute. Concentrating on exposing as little of the application to the outside world as possible allows the developer to focus security testing on a smaller piece of the application. Doing this at an early stage in an application's development means that the exposed part has a greater chance to undergo more testing and become more resilient.

Continuous integration, infrastructure as code, eliminating DoS, and limiting the attack surface while developing a software product will all help to make your product rugged. This doesn't mean the product is immune from security problems, but the added resiliency these techniques provide gives it a leg up on products that aren't rugged. Your product will be more likely to survive and thrive in a dangerous world. Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content.
Please leave feedback in the comments section below.

Additional Resources

On April 9 at 1:30 p.m. ET, Aaron Volkmann and Todd Waits will present the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication. To register for the webinar, please click here.

To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.

To read all of the blog posts in our DevOps series, please click here.
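The post above stops short of prescribing a specific DoS defense. One widely used building block, offered here only as a generic illustration and not as the Rugged guide's recommendation, is per-client rate limiting with a token bucket:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.now = now            # injectable clock, eases testing
        self.last = now()

    def allow(self):
        """Spend one token if available, refilling for elapsed time."""
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should reject or queue the request

# Example: allow bursts of 2 requests, refilling at 1 request/second.
bucket = TokenBucket(rate=1, capacity=2)
```

Rate limiting alone does not eliminate DoS, especially distributed attacks, but it keeps a single noisy client from exhausting a service, which is one piece of the bend-but-don't-break posture.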
SEI Blog, Jul 27, 2015 01:22pm
On May 27, @shrmnextchat chatted with employment attorney Jonathan Segal @jonathan_HR_law about How Workplace Violence Creates HR Nightmares. If you missed this important chat, you can read all the tweets here:   [View the story "#Nextchat RECAP: How Workplace Violence Creates HR Nightmares " on Storify]      ...
SHRM Blog, Jul 27, 2015 01:21pm
This week Digital DWP is sponsoring Leeds GovJam 2015. GovJam is a global event. It's for anyone and everyone with an interest in public services and making them better. We think this event is a good fit for us. I love the idea of hack events like GovJam. They let everyone join in. They teach people that anyone can be a designer. Not only do they encourage close collaboration, they challenge people to get out of the building and get involved.

Digital Academy

The thing I've been most impressed with since joining the Department for Work and Pensions is the Digital Academy. More than 200 of our staff have now graduated from the Academy. They're learning about user-centred design and Agile delivery, and most importantly they're joining in. When people graduate from the Academy they get to learn by doing. They work on real products and services. This isn't always about individual job roles or skills. It's about working together to understand a problem and deliver better services.

Business transformation

In DWP, we're all now part of this business transformation. To make this happen we need to encourage more people in our organisation to put users first. I believe that anyone already thinking about user-centred design needs to be encouraged. We need to get hold of this potential. It's not something you can manage, but you can give people the freedom and room to grow. Digital delivery is more than a numbers game. We need to invest in everyone who has the potential to think differently: anyone who's willing to try and put users first, to go and see a problem for themselves, and to work hard to make things work better.

The cost of learning

Putting blockers in the way of people continuing to learn and grow by doing real work defeats the opportunity of the Academy. We will make mistakes and we'll sometimes lack the experience we need. But learning by doing means that this is okay.
As I wrote last week, we're all a little bit scared of getting out of the building and talking to end users of our services. We need to be brave. Time and resources will get stretched. Finding experienced people to mentor and support others is difficult. But as Tom Loosemore from the Government Digital Service told us last year at our DWP Sprint event: "Transformation. It's not complicated. It's just hard." What's not complicated is that we start by putting users first.

What happens next

I'm looking forward to seeing what happens this week at GovJam, and I'm even more excited to see just where we will be in 12 months' time.
DWP Digital Blog, Jul 27, 2015 01:21pm
By Greg Shannon, Chief Scientist, CERT Division

In 2014, approximately 1 billion records of personally identifiable information were compromised as a result of cybersecurity vulnerabilities. In the face of this onslaught of compromises, it is important to examine fundamental insecurities that CERT researchers have identified and that readers of the CERT/CC blog have found compelling. This post, the first in a series highlighting CERT resources available to the public, including blogs and vulnerability notes, focuses on the CERT/CC blog. It highlights security vulnerability and network security resources to help organizations in government and industry protect against breaches that compromise data. The most visited posts on the CERT/CC blog center on a critical area of research: SSL certificates, a core foundation of trust for transmissions on the Internet. These posts explore weaknesses in those trust relationships as implemented on mobile platforms and also highlight tools that have been created at CERT to explore those vulnerabilities. Before we take a deeper dive into SSL certificates, let's take a look at the top 10 posts (as measured by number of visits) on the CERT blogs.

SSL Tools

The three most popular posts on the CERT blogs were written by CERT researcher Will Dormann and stemmed from his analysis of network traffic (HTTP and HTTPS) using man-in-the-middle (MITM) tools and techniques. While there are plenty of MITM proxies, such as ZAP, Burp, Fiddler, mitmproxy, and others, Dormann wanted a transparent network-layer proxy, rather than an application-layer one. After a bit of trial-and-error investigation, he found a software combination that works well for this purpose: CERT Tapioca.
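The client-side flaw that Tapioca-style MITM testing catches usually comes down to disabled certificate validation. A minimal Python sketch (illustrative only, not taken from the Tapioca tooling) of the difference between a validating client and a broken one:

```python
import ssl

# A properly validating TLS client: certificate-chain verification and
# hostname checking are both on by default in the standard library.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname is True

# The broken pattern MITM testing catches: a client that disables
# validation will accept a proxy's forged certificate without warning.
broken = ssl.create_default_context()
broken.check_hostname = False       # must be disabled before verify_mode
broken.verify_mode = ssl.CERT_NONE  # accepts any certificate at all
```

A transparent proxy like Tapioca presents its own certificate to every client; apps built like `strict` refuse the connection, while apps built like `broken` hand their traffic to the proxy, which is exactly the signal the testing looks for.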
Dormann wrote The Risks of SSL Inspection, the most popular post on the CERT blogs, after he started examining the SuperFish and PrivDog vulnerabilities and realized that the SSL inspection techniques used by those two applications are similar to trusted enterprise software that performs SSL inspection. When he started looking into SSL inspection software in general, he realized that many of these products are making mistakes that put clients at risk. Here is an excerpt:

Recently, SuperFish and PrivDog have received some attention because of the risks that they both introduced to customers because of implementation flaws. Looking closer into these types of applications with my trusty CERT Tapioca VM at hand, I've come to realize a few things. In this blog post, I will explain:

- The capabilities of SSL and TLS are not well understood by many.
- SSL inspection is much more widespread than I suspected.
- Many applications that perform SSL inspection have flaws that put users at increased risk.
- Even if SSL inspection were performed at least as well as the browsers do, the risk introduced to users is not zero.

The complete post, The Risks of SSL Inspection, can be read here.

The CERT/CC post in which Dormann introduced CERT Tapioca is the second most visited post on the CERT/CC blog in the last six months. In that post, Dormann introduces CERT Tapioca and takes readers through a test run using the tool. Here is an excerpt:

Once you start performing MITM testing of HTTPS traffic, you hopefully will have a better idea of the level of trust that you should be giving HTTPS. Let's consider the situation where you type https:// and the domain name in your web browser's address bar for the site that you want to visit. If you don't get a certificate warning, what does that mean? It really just means that the certificate provided by the server was issued by any one of the root CAs trusted by your browser.
For example, the current version of Mozilla Firefox comes pre-loaded with over 90 trusted root CAs. Any one of these sites may provide a dozen or more individual trusted root CA certificates. If you are using a system that you don't manage, and you're relying on HTTPS to keep your web traffic from prying eyes, you may want to think twice. Just like we can silently intercept HTTPS traffic by having the mitmproxy root CA certificate installed, you must assume that enterprises with managed desktop systems are employing similar techniques to monitor traffic. It is their network, after all. But it is also worth noting the impact the compromise of a single root CA can have. It's happened in the past with DigiNotar and Comodo. Without getting too sidetracked here, Moxie Marlinspike's blog entry SSL And The Future Of Authenticity goes into a good amount of detail about the problems with SSL and trust.

The complete post, Announcing CERT Tapioca for MITM Analysis, can be read here.

In Finding Android SSL Vulnerabilities with CERT Tapioca, a follow-up to the post introducing CERT Tapioca, Dormann describes using CERT Tapioca for the automated discovery of SSL vulnerabilities in Android applications. Here is an excerpt:

As mentioned in my previous blog post, one of the uses of CERT Tapioca is discovering applications that fail to properly validate SSL certificates for HTTPS connections. As a proof-of-concept experiment, I took an Android phone and loaded some apps onto it. By bridging the "inside" network adapter of Tapioca to a wireless access point, I was able to create a WiFi hotspot that would automatically attempt to perform MITM attacks on any associated client. Using a physical phone worked fine, and I was able to easily and quickly test a handful of apps. The problem is that this sort of testing environment doesn't scale. The Google Play store currently has about 1.3 million applications, of which about 1 million are free.
If it takes me 60 seconds to test each application, and if I’ve done my math correctly, it would take me a bit over 8 years to test each free Android application, assuming that I put in a 40 hour week for 52 weeks a year. While it was fun to test a handful of applications, I’m pretty sure that I would get bored before I made it through all of them. And I’d like to think that there are more valuable uses of my time.
Automation to the Rescue
Computers are great for performing tedious, boring work. Why not let them do the work? So how can we automate testing Android applications? First, I started with the Android Emulator that comes with the Android SDK. I installed it in a Linux virtual machine and created an Android virtual device. Because ARM Android is emulated rather than virtualized, it's very slow. So after the Android Virtual Device (AVD) completely powered up, I took a snapshot of the powered-on Linux virtual machine that it was running in. I also had an instance of CERT Tapioca providing network connectivity to the Android Emulator VM. The inside network adapter of Tapioca was connected to the same virtual network as the adapter for the Android Emulator VM. With that done, I wanted to control the AVD as well as the Linux OS running it. The complete post, Finding Android SSL Vulnerabilities with CERT Tapioca, can be read here. The remainder of this SEI blog post highlights the most visited posts on the CERT/CC blog in the area of vulnerability discovery.
Vulnerability Discovery
By paying greater attention to the early phases of the development lifecycle, CERT researchers aim to change the nature of the software development process to detect and eliminate—and later avoid—vulnerabilities before products are released. We work to achieve this goal by placing knowledge, techniques, and tools in the hands of software vendors to help them understand how vulnerabilities are created and discovered so that they can learn to avoid them.
CERT researchers have developed a suite of tools to help vendors make more secure software. As the post below illustrates, researchers also try to help improve the public’s understanding of security concepts. In the post Differences Between ASLR on Windows and Linux, Dormann explains how one of the major exploit mitigations (ASLR) differs between the Windows platform and Linux. Here is an excerpt from the post: A program or library that is linked with the /DYNAMICBASE option will be compatible with ASLR on Windows. According to the Windows ISV Software Security Defenses document: In general, ASLR has no performance impact. In some scenarios, there’s a slight performance improvement on 32-bit systems. However, it is possible that degradation could occur in highly congested systems with many images that are loaded at random locations. The performance impact of ASLR is difficult to quantify because the quantity and size of the images need to be taken into account. The performance impact of heap and stack randomization is negligible. For this reason, there really is no reason to link anything without the /DYNAMICBASE option, which enables ASLR. With /DYNAMICBASE enabled, a module’s load address is randomized, which means that it cannot easily be used in Return Oriented Programming (ROP) attacks. When it comes to Windows applications, we recommend that all vendors use both DEP and ASLR, as well as the other mitigations outlined in the Windows ISV Software Security Defenses document. If vendors have not elected to use /DYNAMICBASE, users have the ability to force ASLR through the use of Microsoft EMET. The complete post, Differences Between ASLR on Windows and Linux, can be read here. In the post Vulnerability Coordination and Concurrency Modeling, CERT researcher Allen Householder highlights recent work in the area of cybersecurity information sharing and the ways it can succeed or fail.
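The /DYNAMICBASE option discussed in the ASLR excerpt above sets a single bit (IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE, 0x0040) in the DllCharacteristics field of a PE file's optional header, so ASLR compatibility can be checked from the raw bytes alone. The sketch below is illustrative rather than taken from the post; a production tool would more likely use a PE-parsing library such as pefile:

```python
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040

def has_aslr(data: bytes) -> bool:
    """Return True if a PE image was linked with /DYNAMICBASE."""
    # e_lfanew (offset 0x3C of the DOS header) points at the PE signature.
    pe_off = int.from_bytes(data[0x3C:0x40], "little")
    if data[pe_off:pe_off + 4] != b"PE\x00\x00":
        raise ValueError("not a PE image")
    # The optional header follows the 4-byte signature and 20-byte COFF
    # header; DllCharacteristics sits at offset 70 in both PE32 and PE32+.
    opt = pe_off + 4 + 20
    dll_chars = int.from_bytes(data[opt + 70:opt + 72], "little")
    return bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
```

Running a check like this over a vendor's shipped binaries is a quick way to audit whether the linking recommendation above was actually followed.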
In the introduction, Householder explains that in vulnerability discovery and cybersecurity information sharing work, he often learns the most by examining the failures, in part because the successes are often just cases that could have failed, but didn’t. Here is an excerpt from the post: One of the first things you notice when you start thinking about vulnerability coordination is that there are more ways for it to go wrong than there are for it to go right. But we’ll get to that. It all starts with a vulnerability (vul). Let’s leave aside how that vul got there. We don’t really care. It’s simply a given for our model. Oh, right, we haven’t talked about models yet. Well, in this post I’m using Petri nets to demonstrate the coordination process. Petri nets are a way of modeling systems that operate with concurrency, and concurrency is often mentioned as one of the most challenging aspects of modern system engineering. If you’ve never seen a Petri net before, here is a quick introduction:
- Petri nets model distributed processes as a network of nodes and arcs.
- Nodes can be either places (circles) or transitions (boxes).
- Arcs (arrows) connect places to transitions and vice versa. Places can’t connect to places, and transitions can’t connect to transitions.
- Places can hold tokens, which mark the state of a process. Transitions represent events that change the state of the process.
- A transition can fire when all the places immediately upstream of it are occupied by tokens (i.e., when it is enabled). When a transition fires, it consumes tokens from its inputs and places tokens in its outputs.
The complete post, Vulnerability Coordination and Concurrency Modeling, can be read here. In the final post in the vulnerability research area, Vulnerabilities and Attack Vectors, Dormann describes a few of the more interesting cases he has encountered through his research in examining attack vectors as a source for potential vulnerabilities.
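The Petri net firing rules Householder summarizes translate almost line for line into code. Here is a toy sketch (the class and the two-transition coordination example are illustrative, not from the post):

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        # A transition is enabled when every upstream place holds a token.
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:               # consume tokens from the inputs
            self.marking[p] -= 1
        for p in outputs:              # place tokens in the outputs
            self.marking[p] = self.marking.get(p, 0) + 1

# A tiny coordination chain: vul reported -> fix developed -> fix released.
net = PetriNet({"reported": 1, "fixed": 0, "released": 0})
net.add_transition("develop_fix", ["reported"], ["fixed"])
net.add_transition("release", ["fixed"], ["released"])
net.fire("develop_fix")
net.fire("release")
print(net.marking)   # {'reported': 0, 'fixed': 0, 'released': 1}
```

Attempting to fire "release" before "develop_fix" raises an error, which is exactly the kind of ordering constraint that makes Petri nets useful for modeling coordination failures.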
The post was published in 2013 and remains among the most popular on the CERT/CC blog. Here is an excerpt: Attack vector analysis is an important part of vulnerability analysis. For example, reading an email message with Microsoft Outlook can be used as an attack vector for the Microsoft Jet Engine stack buffer overflow (VU#936529). With the Microsoft Windows animated cursor stack buffer overflow (VU#191609), reading an email message with Microsoft Outlook Express in plain text mode can also be used as an attack vector. In both cases, it’s the analysis of the different attack vectors, not the underlying vulnerabilities, that improves our understanding of the severity. Below are some recent examples where attack vector analysis played an important role. The complete post, Vulnerabilities and Attack Vectors, can be read here.
CERT Vulnerability Notes
In addition to the blog, another valuable public resource is the CERT Vulnerability Notes Database, which provides timely information about software vulnerabilities. Vulnerability notes include summaries, technical details, remediation information, and lists of affected vendors. Many vulnerability notes are the result of private coordination and disclosure efforts. Industry organizations cite vulnerability notes as part of technical notifications to customers and users. Visitors to the Vulnerability Notes Database can search for specific information, such as
- the 10 most recently updated vulnerabilities
- a list of vulnerabilities that affect control systems
- a list of vulnerabilities discovered using the Basic Fuzzing Framework (BFF)
CERT researchers also provide an archive of all public vulnerability information from the database.
Looking Ahead
The past six months have been an important time for the CERT/CC Blog in terms of keeping our stakeholders informed and helping them protect themselves against ever-present cyber threats.
The next blog post in this series will highlight the most popular post on the CERT Insider Threat blog, which aims to help organizations protect against insider threat. As always, we welcome your ideas for future posts and your feedback on those already published. Please leave feedback in the comments section below.
Additional Resources
For more information about CERT Tapioca or to download the tool, please visit www.cert.org/vulnerability-analysis/tools/cert-tapioca.cfm.
SEI . Blog . Jul 27, 2015 01:20pm