

"Perfect", "Can’t be improved", "No further improvement needed" - reading through customer feedback for the Carers Allowance Digital Service it strikes me how many of our users think that there’s nothing more to do. We can pack up and go home.  Au revoir. Sayonara. Farewell! Far be it from me to suggest these particular users are wrong, it’s just that they’re not quite right on this one. Excellent, yes. Perfect, no. And it’s not just our users. We achieved Live Accreditation Status in November 2014 and for many people this translated as ‘job done’. Erm…no! We haven’t ‘gone live’ in the old fashioned sense. The interpretation of go-live that applies to traditional projects is ‘go-dead’ in the context of agile. We ‘go live’ every fortnight. When we went Live there was a backlog of changes and features still waiting to be prioritised. We didn’t get to live and stop, we got to live and started doing more. Since November we've made over 20 releases covering around 300 user stories. We haven’t done that for the sake of it, it’s in response to the needs of our users. These needs drive everything we deliver, they don’t stop, they change and evolve. We’re here to ensure that the service meets those needs. As the service matures we've put even more focus into our analytics to identify barriers within the user journey; why are we losing so many people at the disclaimer? Why are so many people calling to find out whether we have received their claim? Why do people spend so long at certain questions? This analysis feeds the research, the research feeds the backlog and it all contributes to continuously improving the service for users. Here’s just a few of the many improvements delivered since live, we've: moved the Disclaimer to the front of the service so users know what they are signing up to before they start - completion rates have rocketed from 60% to over 80%. Simpler for users. introduced email notifications to let users know that we've received their claim resulting in reduced calls to our contact centres. Clearer for users. reviewed and clarified all the questions - completion times have tumbled below 25 minutes. Faster for users. It doesn't stop with research. We’re trialling new technologies, pushing new infrastructure and security features that will enable smoother delivery of the Departments other digital initiatives. This means more and more services will be delivered this way. Services that will work better for users. With a satisfaction score consistently at 90% and being told we’re perfect, we’re in a happy place but this journey is far from complete. We’ll continue to do more until the service no longer needs to meet the need of its users - and that doesn't feel like any time soon.
DWP Digital · Blog · Jul 27, 2015 01:19pm
By Greg Shannon, Chief Scientist, CERT Division

For two consecutive years, organizations reported that insider crimes caused comparable damage (34 percent) to external attacks (31 percent), according to a recent cybercrime report co-sponsored by the CERT Division at the Carnegie Mellon University Software Engineering Institute. Despite this near parity, media reports often focus on external attacks and their aftermath, yet an attack can be equally or even more devastating when carried out from within an organization. Insider threats are influenced by a combination of technical, behavioral, and organizational issues and must be addressed by policies, procedures, and technologies.

Researchers at the CERT Insider Threat Center define insider threat as actions by an individual who meets the following criteria: a current or former employee, contractor, or business partner who has or has had authorized access to an organization's network, system, or data and intentionally exceeded or intentionally used that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization's information or information systems. The CERT Insider Threat Center provides analysis and solutions to organizations through partnerships with the U.S. Department of Defense, the U.S. Department of Homeland Security, the U.S. Secret Service, other federal agencies, the intelligence community, private industry, academia, and the vendor community.

This blog post, the second in a series, introduces the CERT Insider Threat Center blog, which highlights the latest research and security solutions to help organizations protect against insider threat. (The most visited posts on the companion CERT/CC blog center on SSL certificates as a core foundation of trust on the Internet; those posts explore weaknesses in trust relationships as implemented in mobile platforms and highlight tools created at CERT to explore those vulnerabilities.) Before we take a deeper dive into the most visited insider threat posts of the last six months, let's take a look at the top 10 posts (as measured by number of visits) across both CERT blogs (CERT/CC and Insider Threat). Although some of these posts are several years old, their continued popularity demonstrates the ongoing relevance of work by researchers at the CERT Insider Threat Center.

Insider Threat Statistics

During presentations, assessments, or while instructing courses, our researchers are often asked about the state of insider threat. "Just how bad is it?" is a question often heard. Capturing accurate data on insider threat proves difficult, however, as organizations are often loath to report incidents and risk negative press or damage to their standing. These repeated requests became the catalyst for the post Interesting Insider Threat Statistics, which has been the most popular post on the CERT Insider Threat blog in the six months ending in March 2015.
This blog post presents statistics on insider threat as well as the costs that organizations incur as a result of an insider incident. Here is an excerpt from the post:

According to the 2010 CyberSecurity Watch Survey, sponsored by CSO Magazine, the United States Secret Service (USSS), CERT, and Deloitte, the mean monetary value of losses due to cybercrime was $394,700 among the organizations that experienced a security event. Note that this figure accounts for all types of security incidents, including both insiders and outsiders. What is especially concerning is that 67 percent of respondents stated that insider breaches are more costly than outsider breaches. This dollar figure does not fully account for the damages caused by insiders, though. For instance, activities such as website defacement and exposure of private email correspondence may not involve expensive remediation, but they would still cause a great deal of harm to the victim organization. How valuable is your reputation? How much does your website represent you? If you are an e-commerce company that assures its customers that they will have secure transactions, imagine the damage to your business if your website gets compromised.

Another common question we receive is, "How many insider attacks take place annually?" This is a much more difficult question to answer. Consider that in the same survey, among 523 respondents, 51 percent of those who experienced a security incident also experienced an insider attack. The problem with approximating a total number of insider attacks is that, in our experience, a large number of these attacks go unreported. In fact, according to the survey, "the public may not be aware of the number of incidents because almost three-quarters (72%), on average, of the insider incidents are handled internally without legal action or the involvement of law enforcement." There are a variety of reasons why companies choose not to report insider cases: in particular, lack of evidence to prosecute, damage levels that were insufficient to warrant prosecution, inability to identify the perpetrator, and fear of public embarrassment. However, even this does not tell the full story. Based on our research and collaboration with other industry leaders, we believe that most insider crimes go unreported not because they are handled internally, but because they are never discovered in the first place.

The complete post Interesting Insider Threat Statistics can be read here.

Insider Threat and Physical Security of Organizations

In our database of incidents involving malicious insider activity (including crimes of IT sabotage, theft of intellectual property, and fraud), about 8 percent involve physical security issues. Physical access to an organization's secure areas, equipment, or materials containing sensitive data may make it easier for a malicious insider to commit a crime. Therefore, an organization's physical security controls are often just as important as its technical security controls. The post Insider Threat and Physical Security of Organizations provides some case studies of physical security issues as well as some physical security controls. Here is an excerpt from the post:

In our case repository of incidents of malicious insider activity, including crimes of IT sabotage, theft of intellectual property, and fraud, about 8 percent involve physical security issues of concern. The case summaries below outline a few of the cases that we've analyzed.
- For more than a year, a contract janitor stole customer account and personally identifiable information from hard-copy documents at a major U.S. bank. The janitor and two co-conspirators used this information to steal the identities of more than 250 people. They were able to open credit cards and then submit online change-of-address requests so the victims would not receive bank statements or other notifications of fraudulent activity. The insiders drained customers' accounts, and the loss to the organization exceeded $200,000.
- A contract programmer tricked a janitor into unlocking another employee's office after hours. He switched the door's name plate and requested that the janitor let him into "his" office. The programmer, who had already obtained employment with a competitor, was able to download sensitive source code onto removable media.
- A hospital security guard accessed and stole personally identifiable information regarding the organization's patients. The guard and three co-conspirators opened fraudulent cell phone plans and credit card accounts. As part of the scheme, they changed the account addresses of the victims so the bills would never reach the account owners. After being caught, the insider was ordered to pay $18,000 for the crime.
- A communications director showed an expired ID badge to a security guard to gain unauthorized access to a data backup facility. Once inside, the director unplugged security cameras and stole backup tapes containing records for up to 80,000 employees.
- A contract security guard used a key to obtain physical access to a hospital's heating, ventilating, and air conditioning (HVAC) computer and another workstation. The guard used password-cracking software to obtain access and install malicious software on the machines. The incident could have affected temperature-sensitive patients, drugs, and supplies.
- An insider stole an organization's trade-secret drawings that were marked for destruction and sold them to a competing organization. The victim organization estimated its losses at $100 million. The competing organization that received the stolen documents was forced to declare bankruptcy after a lawsuit.

We have also observed the following physical security issues in the case data:
- Infiltration/exfiltration of physical property: activities such as bringing removable media in and out of a facility
- Improper termination of an employee's physical access or access badge
- Unauthorized access to facility: employees entering facilities during unusual hours or unauthorized employees walking through an open door behind an authorized employee (known as "piggybacking")
- Generally poor physical security: general issues such as insufficient guard oversight or insufficient separation of duties for physical access controls
- Employee used an unauthorized workstation: employees who are able to physically enter another employee's office/workspace and access their workstation
- Breaking and entering/physical destruction: employees breaking into secure spaces or stealing physical equipment
- Janitorial staff issues: janitorial staff who steal sensitive information or are socially engineered into violating physical security
- Improper disposal or destruction of organization information

The complete post Insider Threat and Physical Security of Organizations can be read here.

Theft of Intellectual Property and Tips for Prevention

One of the most damaging ways an insider can compromise an organization is by stealing its intellectual property (IP).
An organization should not underestimate the value of its secrets, product plans, and customer lists. In the publication An Analysis of Technical Observations in Insider Theft of Intellectual Property Cases, CERT Insider Threat researchers took a critical look at the technical aspects of cases in which insiders stole IP from their organization. Insiders commit these crimes for various reasons, such as to benefit another entity, to gain a competitive business advantage, to start a competing organization or firm, or to gain personal financial benefit. By understanding the specific technical methods that insiders use to steal information, organizations can consider gaps in their network implementation and can identify ways to improve controls that protect their IP.

Technical discussions of IP theft help operational staff understand how insiders can compromise their organization. Additionally, organizations should always attempt to better understand the human behavioral elements of insider crimes. The report A Preliminary Model of Insider Theft of Intellectual Property details two preliminary models of behavior associated with insider theft of IP. The third most visited post on the CERT Insider Threat blog, Theft of Intellectual Property and Tips for Prevention, presents highlights of the research in both reports. Here is an excerpt from the post:

Our study indicated that the most common method of physical exfiltration of data was removable media. Prior to 2005, the most common removable medium was writable CD. However, recent incidents indicate that removable USB mass storage devices like thumb drives and external hard disks are now more popular. USB devices have a much greater storage capacity than CDs, which makes it easier for insiders to move their entire desired data set at once.

What can organizations do about these problems? First, they can always consider the role of best practices and established standards in defending against insider attacks. Insider attacks frequently exploit policies or controls that are covered in accepted best practices for IT system security. Second, organizations should always consider more than just the technical aspects of the crime. In a recent report, Deriving Candidate Technical Controls and Indicators of Insider Attack from Socio-Technical Models and Data, we examined the importance of creating technical indicators for behavioral actions so that we can gain a more complete understanding of how to defend against insider crimes.

Organizations should pay specific attention to these technical vulnerabilities while they attempt to understand what controls are practical to put in place for removable media in the organization. If removable media is necessary to keep operations moving, an organization may want to establish technical measures to limit which machines allow use of removable media, take an inventory of authorized media, and implement some measure of physical security to prevent removal or introduction of new uninventoried devices from the facility. When considering network security, organizations should attempt to identify suspicious email communications (particularly those with attachments) to direct competitors, foreign governments, or other illegitimate recipients of corporate mail. Organizations should consider using a log aggregation and indexing tool to look for patterns in behavior that might warrant further investigation, as the sketch below illustrates.
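As a rough illustration of that kind of log aggregation and pattern search, the following Python sketch scans aggregated outbound-mail records for large attachments addressed outside the organization. The log format, field names, domain, and size threshold are all invented for illustration; they do not refer to any particular product or dataset.

    import csv

    # Hypothetical aggregated mail log (CSV) with columns: sender, recipient, attachment_bytes.
    # The file name, field names, domain, and 10 MB threshold are illustrative assumptions.
    INTERNAL_DOMAIN = "example.org"
    THRESHOLD_BYTES = 10 * 1024 * 1024

    def suspicious_messages(log_path: str):
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                recipient_domain = row["recipient"].rsplit("@", 1)[-1].lower()
                external = recipient_domain != INTERNAL_DOMAIN
                large = int(row["attachment_bytes"]) > THRESHOLD_BYTES
                if external and large:
                    # Flag for analyst review, not for automatic action.
                    yield row

    for msg in suspicious_messages("mail_log.csv"):
        print(msg["sender"], "->", msg["recipient"], msg["attachment_bytes"], "bytes")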
Such monitoring is especially important during major organizational events that may cause stress among employees, such as mergers, downsizing, acquisitions, or reorganizations. These events could influence employee behavior in a negative way, and a heightened awareness of security might be necessary. The complete post Theft of Intellectual Property and Tips for Prevention can be read here.

Theft of Intellectual Property by Insiders

The CERT insider threat database was started in 2001 and contains insider threat cases that can be categorized into one of four groupings:
- fraud
- sabotage
- theft of intellectual property
- miscellaneous

The post Theft of Intellectual Property by Insiders presents cases in our database that involve the theft of IP. As of the date of this post (December 18, 2013), 103 insider threat cases in the database included the theft of IP. (All statistics are reported as a percentage of the cases that had relevant information available.) Here is an excerpt from the post:

- Insider theft of IP occurred most frequently in the information technology (35 percent of cases), banking and finance (13 percent), and chemical (12 percent) industry sectors. (The industry sector was known in 101 of the 103 cases.)
- The majority of insider IP theft incidents occurred onsite. (The attack location was known in 78 of the 103 cases.)
- Trusted business partners accounted for over 17 percent of attackers, and former employees accounted for 21 percent. (Employment status was known in 100 of the 103 cases.)
- Over 30 percent of insider theft of IP cases were detected by non-technical means, while fewer than 6 percent were detected by a software solution.
- The financial impact of these attacks is substantial: it was over $1,000,000 USD in 48 percent of cases and over $100,000 in 71 percent of insider theft of IP cases. (Financial impact was known in 35 of the 103 cases.)

For additional information and more in-depth analysis of the insider threat cases involving the theft of IP with foreign beneficiaries, please see our report Spotlight On: Insider Theft of Intellectual Property Inside the United States Involving Foreign Governments or Organizations. In addition to the theft of intellectual property, the CERT Insider Threat Center has conducted studies of other insider threat cases, including insider fraud in the U.S. financial services sector and potential patterns of insider threat cases involving sabotage. The complete post Theft of Intellectual Property by Insiders can be read here.

Looking Ahead: Helping Organizations Establish an Insider Threat Program

This has been an important year for the insider threat blog in terms of keeping our stakeholders informed and helping them protect themselves against ever-present cyber threats. Now that we have looked at the top posts, I would like to make you aware of a new series that was recently launched on the Insider Threat blog. Earlier this year, researchers at the CERT Insider Threat Center launched a series of blog posts aimed at helping organizations establish an insider threat program. This series is intended to help organizations affected by Executive Order 13587, Structural Reforms to Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information, to establish a program for deterring, detecting, and mitigating insider threats. This executive order affects organizations that work within the U.S. federal government and that operate or access classified computer networks.
The first post by Randy Trzeciak, technical manager of the Insider Threat Center, was published in early March 2015 and outlines planned posts for the series. Here is an excerpt from that post:

Because of a number of high-profile incidents that have significantly impacted organizations recently (e.g., sabotage, theft of information, fraud, national-security espionage), many organizations across government, industry, and academia have recognized the need to build an insider threat program (InTP) to protect their critical assets. Over the course of the next few months, we will be discussing the following topics as part of our blog series:
- Introduction to the CERT Insider Threat Center
- Components of an Insider Threat Program
- Requirements for a Formal Program
- Organization-Wide Participation
- Oversight of Program Compliance and Effectiveness
- Integration with Enterprise Risk Management
- Prevention, Detection, and Response Infrastructure
- Insider Threat Training and Awareness
- Confidential Reporting Procedures and Mechanisms
- Insider Threat Practices Related to Trusted Business Partners
- Data Collection and Analysis Tools, Techniques, and Practices
- Insider Incident Response Plan
- Communication of Insider Threat Events
- Policies, Procedures, and Practices to Support the Insider Threat Program
- Protection of Employee Civil Liberties and Privacy Rights
- Defining the Insider Threat Framework
- Developing an Implementation Plan
- Conclusion and Resources

In this series we will describe the key elements of an effective insider threat program. We will begin by examining the need to build a program.

The complete post, InTP Series: Establishing an Insider Threat Program (Part 1 of 18), can be read here. As always, we welcome your ideas for future posts and your feedback on those already published. Please leave feedback in the comments section below.

Additional Resources:

For more information about the CERT Insider Threat Center, please visit http://www.cert.org/insider-threat/. To view the CERT Insider Threat Blog, please visit http://www.cert.org/blogs/insider-threat/.
SEI · Blog · Jul 27, 2015 01:19pm
By Chris Taschner, Project Lead, CERT Cyber Security Solutions Directive

This post is the latest installment in a series aimed at helping organizations adopt DevOps.

Tools used in DevOps environments, such as continuous integration and continuous deployment, speed up the process of pushing code to production. Often this means continuous deployment cycles that could result in multiple deployments per day. Traditional security testing, which often requires manually running multiple tests in different tools, does not keep pace with this rapid schedule. This blog post introduces a tool called Gauntlt, which attempts to remedy this issue. The idea behind Gauntlt is to make writing rugged code easier by allowing developers to turn security testing into code. This approach, in turn, simplifies the integration of security testing into the deployment and testing processes. Gauntlt provides adapters for curl, nmap, sslyze, and garmr and also features a generic command-line adapter to run any command-line tool.

One of the greatest changes to software engineering that DevOps presents is getting developers and operations together, collaborating during the entire systems development lifecycle (SDLC). Adding security testing into the SDLC helps reduce several common problems. Often security testing is done after the fact, and solutions are then bolted on. Security testers, developers, and operations staff often work in silos, which reduces communication. This isolated work environment also means that when security flaws are found in software or infrastructure, disputes arise. Because security testing is typically done so late in the process, it can become costly to add required security solutions. In addition, flaws might be missed when testing is done on a large code base.

Gauntlt allows development and operations teams to collaborate with security testers and work side by side throughout the SDLC by enabling security testing via test scripts, which can be incorporated into the continuous integration process. This approach helps move security testing left, from the end of the process to the start. Pushing security testing left improves the security of the application.

The idea of having a single, readable test script lets developers and operations staff contribute to and inspect the tests being performed. This collaboration helps prevent possible miscommunication about what should and shouldn't be allowed. Using this methodology, security testing engineers won't design tests with assumptions that developers aren't aware of, and developers will learn security practices and become better equipped to build in security up front.

In addition, Gauntlt enhances the reproducibility of security testing. Individual security testers can run any of the tools mentioned above, interpret the output, and be effective. But if the security testers are having an off day, or forget to run a particular test, they can easily miss important security issues. Gauntlt turns security testing into code and makes it easily repeatable by performing all security testing at the push of a button.

Launching Gauntlt for the first time requires a quick three-step process that's described at gauntlt.org. Once Gauntlt is installed, attacks can be written utilizing the tools provided. "Attack" is the terminology that Gauntlt uses to mean security test. Gauntlt provides some example attacks written for each of the adapters it handles.
The attack files are written using the Gherkin syntax as defined by Cucumber. Here is a sample attack that attempts to detect robots.txt files on a webserver. The @slow tag tells Gauntlt to allow for a 30-second timeout; @veryslow allows for a five-minute timeout. These timeouts prevent tools from holding up testing while allowing flexibility for tests that normally take longer to complete.

    @slow
    Feature: nmap robots attack

    Background:
      Given "nmap" is installed
      And the following profile:
        | name     | value        |
        | hostname | www.cert.org |

    Scenario: Detects robots.txt files on this host.
      When I launch an "nmap" attack with:
        """
        nmap --script http-robots.txt <hostname>
        """
      Then the output should contain:
        """
        | http-robots.txt:
        """

Overall, Gauntlt provides a powerful framework that can be used as a foundation for performing directed tests on a web project. With this tool, users can begin to further automate their testing environment and simplify the interpretation of testing results. For more information on the syntax and how to utilize this tool, see Hands On Gauntlt by James Wickett. The site includes a free sample that will help users get started.

Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.

Additional Resources

To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits, please click here. To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here. To listen to the podcast DevOps—Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here. To read all of the blog posts in our DevOps series, please click here.
SEI · Blog · Jul 27, 2015 01:19pm
I am one of six people who were recently selected for the first ever DWP Junior WebOps training program. The program was specifically created for DWP by external IT training provider QA Training with input from the DWP WebOps team. It runs for 12 weeks and covers a wide range of topics and skill areas that are essential to creating and maintaining user-centred digital services at DWP. The six of us are a mix of ages and backgrounds. Everyone on the program has come from a different part of DWP but we all have a common interest in WebOps and all share a desire to learn. What is WebOps? WebOps was initially described to me by colleagues as "the dark arts" that keep the wheels turning at DWP Technology. A few weeks into the training program, I’d sum up WebOps as the deployment, operation, maintenance, tuning and repair of web-based services. This means that once a service has been developed and delivered into the live environment by our product teams, it’s up to WebOps to keep it functioning as designed. The Junior WebOps Program As we develop new digital products and services at DWP, so the WebOps team needs to grow to support them. The Junior WebOps training program is one way of meeting that need by developing skilled staff in house (we’re also recruiting for experienced WebOps professionals right now). The program is divided into three blocks - I’ve just completed the first block. The learning experience The first day was like attending a new school. Getting up at 5.30am to get to Leeds for 9am wasn’t pleasant, but I was eager (and a bit nervous) for the massive journey that lay ahead. I’d had a telephone call with the others on the program the week before starting, but it was still exciting getting to know each other on that first day, and embarking on a 12-week journey together into the unknown. The first week focused on agile and lean. These are methods of working and thinking that are now well embedded at DWP when it comes to building digital services. Being from an ITIL & PRINCE2 background, it was extremely interesting and new to me, but I definitely saw the value in it and we started applying some of the agile principles immediately (more on that later). By the end of Week 1 we had formed a great team (Lego and an impromptu evening out with the WebOps team and apprentices might have played a part!) and were set for the rest of Block 1. Core WebOps Technologies The next few weeks focused on the core technologies used in WebOps. First up was Linux in Week 2 - a whole new operating system for most of us. The course was excellent, although even with a technical background I found it intense and challenging at times. We were told before we started that this program wasn’t going to be easy and that it would require serious commitment - Week 2 really brought this home. Week 3 began with a couple of revision days - some time to gather thoughts and collate what had been learnt to date - and finished with some structured networking with experienced WebOps colleagues and staff from other teams. This turned out to be the calm before the storm. Week 4 ramped up the intensity even further as we took on the LAMP stack (Linux, Apache, MySQL and PHP) - the backbone of the WebOps world. We had got the basics of working with Linux down in Week 2 so it was good to break those skills out again, but it was still an incredibly challenging few days learning two new software packages and a programming language. 
Incorporating agile One of the best things about the program is that we weren’t just learning abstract concepts - we started using the things we were learning immediately, particularly when it came to agile. Once we’d got the basic agile learning down, we began each day with a stand-up meeting where we could discuss any issues we’d had the day before and agree what we would do that day. Not only did the stand-ups help us get to know each other better, they were also a chance to help each other out with anything we were struggling with and ensure everyone was progressing evenly - really useful given our different backgrounds and knowledge levels. Each week also finished with a retrospective where we advised both the WebOps team and the external training provider on our experiences of the course - what went well, what didn’t and any recommendations for improvements. This is important to the program as we are the first iteration and our feedback will help improve and shape the next version of the course. My personal retrospective on Block 1 is that it has been tough but enjoyable, if you like to learn! I can’t wait to get stuck into the next Block.
DWP Digital · Blog · Jul 27, 2015 01:19pm
In about a month, individuals will be heading to Las Vegas to attend the 2015 SHRM Annual Conference. This will be my 15th SHRM Annual Conference, and, based on my years of experience, here are the things you do NOT want to do while attending.

1. Do NOT avoid drinking water. It's the desert, people. Every day will likely be 100 degrees, and it will be a dry heat, so you won't even feel like you're sweating. But, given the significant amount of walking you're likely to do as well as the...
SHRM · Blog · Jul 27, 2015 01:19pm
By Carol Woody, Technical Manager of the Cybersecurity Engineering Team, CERT Division. This post was co-authored by Bill Nichols.

Mitre's Top 25 Most Dangerous Software Errors is a list that details quality problems as well as security problems. This list aims to help software developers "prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped." These vulnerabilities often result in software that does not function as intended, presenting an opportunity for attackers to compromise a system. This blog post highlights our research in examining techniques used for addressing software defects in general and how those techniques can be applied to improve security defect detection and management.

The errors on Mitre's Top 25 list are both quality problems and potential security problems. Software security generally shares many of the same challenges as software quality and reliability. Consider Heartbleed, a vulnerability in the open source implementation of the secure socket layer (SSL) protocol. At the time Heartbleed was discovered, available software assurance tools were not set up to detect this vulnerability. As we discuss later in this post, however, Heartbleed could have been found through thorough code inspection.

Assurance and Finding Defects

We define software assurance as demonstrating that software functions as intended and only as intended. Vulnerabilities permit unintended use of software that violates security and are therefore defects. Unfortunately, all software begins with defects, and we have no means to prove that any software is totally free of defects or known vulnerabilities. This means that there will always be a risk that software will not function as intended at some later time. This risk led us to explore whether techniques used for addressing defects in general can be used to improve security defect detection and management.

We began by reviewing detailed size, defect, and process data for more than 100 software development projects amassed through the SEI's Team Software Process (TSP) work. The projects include a wide range of application domains, including medical devices, banking systems, and U.S. federal legacy system replacement. This data supports potential benchmarks for ranges of quality performance metrics (e.g., defect injection rates, removal rates, and test yields) that establish a context for determining very high quality products. We have the following types of information available for each project:
- summary data that includes project duration, development team size, cost (effort) variance, schedule variance, and defects found
- detailed data (planned and actual) for each project component: size (added and modified lines of code [LOC]), effort by development phase, defects injected and removed in each development phase, and date of lifecycle phase completion

We analyzed this data to identify baselines for expected numbers of defects of various types and the effectiveness of defect removal techniques. Before we discuss our findings, it is important to note that injecting defects is an inevitable byproduct of software development: developers introduce defects as a natural consequence of building and evolving software. Removing these defects, however, proves more difficult; it is hard to eliminate all defects from software.
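To make these measures concrete, here is a minimal Python sketch of the arithmetic behind a defect injection rate and a removal yield, using invented phase-level counts. It illustrates the kind of calculation involved, not the SEI's TSP tooling or data.

    # Invented phase data: defects injected and removed per development phase,
    # plus added/modified lines of code for the component.
    phases = {
        "design":      {"injected": 20, "removed": 5},
        "code":        {"injected": 60, "removed": 30},
        "code review": {"injected": 2,  "removed": 25},
        "unit test":   {"injected": 1,  "removed": 15},
    }
    loc_added_modified = 4_000

    total_injected = sum(p["injected"] for p in phases.values())
    total_removed = sum(p["removed"] for p in phases.values())
    escaped = total_injected - total_removed              # defects still latent after these phases

    injection_rate = total_injected / (loc_added_modified / 1000)   # defects per KLOC
    removal_yield = total_removed / total_injected                  # fraction removed before release

    print(f"injection rate: {injection_rate:.1f} defects/KLOC")
    print(f"removal yield:  {removal_yield:.0%}, escaped defects: {escaped}")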
The data we analyzed indicated that defects similar to many known security vulnerabilities were injected during the implementation phase. Five of these software products not only had low levels of defects when released, but also were used in domains that required safety or security. The additional testing or analysis performed on these projects gave us a measured sample of escaped safety or security defects.

We found there is no silver bullet for addressing defects or security vulnerabilities. Among the cases we examined, however, the most effective approach was the application of standard quality techniques, such as documented designs, review of the designs against requirements, and code inspections, all performed prior to testing. The projects most effective at defect removal did not rely solely upon testing or static analysis to discover defects; testing and tools were used as a verification of completeness. In cases where early removal techniques had not been effectively applied, developers were often overwhelmed with warnings from their static analysis tools. Conversely, the projects that were applying strong early quality assurance techniques received substantially fewer warnings when using tools, making the follow-up easier to manage.

The five projects selected from the SEI's TSP database demonstrated that producing products with few safety or security operational issues requires an integration of quality reviews for defect removal with security or safety-critical reviews. Examinations were done for both code quality and security/safety considerations at each review point in the lifecycle, beginning with early design.

As detailed in our technical note, Predicting Software Assurance Using Quality and Reliability Measures, which I co-authored along with Robert Ellison and Bill Nichols, assuring that a software component has few defects also depends on assuring our capability to find those defects. Positive results from security testing and static code analysis are often provided as evidence that security vulnerabilities have been reduced. The recent "goto fail" and Heartbleed vulnerabilities demonstrate, however, that it is a mistake to rely on these approaches as the primary means for identifying defects. As we learned by examining these two vulnerabilities, the omission of quality practices, such as inspections, can lead to defects that exceed the capabilities of existing code analysis tools. More information on each of these cases is included below.

Case Study: "goto fail" Vulnerability

In 2014, Apple fixed a critical security vulnerability that was likely caused by the careless use of "cut and paste" during editing. The programmer embedded a duplicate line of code that caused the software to bypass a block of code that verifies the authenticity of access credentials. Researchers discovered this security flaw in iPhones and iPads; Apple confirmed that it also appeared in notebook and desktop machines using the Mac OS X operating system. The vulnerability is described in the National Vulnerability Database as follows:

Impact: An attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS.

Description: Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps.

This vulnerability allowed attackers to use invalid credentials to gain access to any information on the targeted device, such as email, financial data, and access credentials to other systems and devices.
A variety of standard quality techniques, such as a personal review by the developer or a more formal peer review, should have identified the defect for removal. A number of structured development techniques, if applied consistently, could also have identified and possibly prevented implementation of the security coding defect that led to the vulnerability, including:
- designing code to minimize branching and make it more predictable and testable
- architecting the design specification to become the basis for verifying code, including but not limited to requirements analysis and test

While these techniques are excellent strategic recommendations to improve quality in general, they cannot prevent careless mistakes. The same caveat would apply to recommendations such as (1) provide better training for the coders, (2) use line-of-code coverage test cases, or (3) use path-coverage test cases. Using static analysis to identify dead code could have flagged this defect, but not all such mistakes result in truly dead code. It is better to find and remove these problems during a personal review, an informal peer code review, or a formal peer code inspection.

Case Study: Heartbleed Vulnerability

The Heartbleed vulnerability occurred in the OpenSSL "assert" function, which is the initiator of a heartbeat protocol to verify that the OpenSSL server is live. OpenSSL is an open-source implementation of the secure socket layer (SSL) and transport layer security (TLS) protocols used for securing web communications. The assert function sends a request with two parameters, a content string (payload) and an integer value that represents the length of the payload it is sending. If the OpenSSL connection is available, the expected response is a return of the content string for the length specified. The protocol assumes that the requested length of the payload returned is less than 65,535 and less than or equal to the payload length, but those assumptions are never verified by the responding function. A consequence of a violation of either of these limitations is that the request can trigger a data leak. Rather than a buffer overflow, this leak is what is called an over-read. The security risk is that the additional data retrieved from the server's memory could contain passwords, user identification information, and other confidential information. The defect appears to have been accidentally introduced by an update in December 2011.

OpenSSL is a widely used and free tool. At the disclosure of Heartbleed, approximately 500,000 of the internet's secure web servers certified by trusted authorities were believed to be vulnerable to the attack. The new OpenSSL version repaired this vulnerability by including a bounds check to ensure that the payload length specified by a developer is not longer than the data that is actually sent. Unfortunately, that check is only the start of an implemented correction, because elimination of the vulnerability requires the 500,000 users of this software to upgrade to the new version. In addition, because this problem is related to security certificates, protecting systems from attacks that exploit the Heartbleed vulnerability requires that companies revoke old SSL certificates, generate new keys, and issue new certificates. IEEE's Security and Privacy article Heartbleed 101 provides an excellent summary of why this vulnerability was not found sooner, even with the use of static analysis tools.
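To make the missing check concrete, here is a deliberately simplified model of the heartbeat exchange written in Python (OpenSSL itself is C, and this is not its code). The vulnerable responder trusts the length claimed by the requester, so a request that overstates its payload length reads adjacent memory; the repaired version adds the bounds check described above.

    def heartbeat_response(memory: bytes, payload_offset: int, claimed_len: int) -> bytes:
        # Vulnerable behaviour: echo back claimed_len bytes starting at the payload,
        # trusting the length the requester supplied.
        return memory[payload_offset: payload_offset + claimed_len]

    def heartbeat_response_fixed(memory: bytes, payload_offset: int,
                                 claimed_len: int, actual_payload_len: int) -> bytes:
        # The repair: verify the claimed length against the data actually sent
        # before copying anything (the bounds check described above).
        if claimed_len > actual_payload_len:
            raise ValueError("heartbeat request claims more data than it sent")
        return memory[payload_offset: payload_offset + claimed_len]

    # A toy "process memory": a 4-byte payload followed by data that should never leak.
    memory = b"ping" + b"SECRET-KEY-MATERIAL"
    print(heartbeat_response(memory, 0, 16))          # over-read: prints b'pingSECRET-KEY-M'
    try:
        heartbeat_response_fixed(memory, 0, 16, actual_payload_len=4)
    except ValueError as err:
        print("rejected:", err)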
As the article explains, the designer of each static analysis tool has to make trade-offs among the time required for the analysis, the expert help required to support the tool's analysis, and the completeness of the analysis. Most static analysis tools use heuristics to identify likely vulnerabilities and to allow completion of their analysis within useful times. The article goes on to explain that while static analysis tools can be effective for finding some types of vulnerabilities, the complexity of OpenSSL (including multiple levels of indirection and other issues) exceeded the capabilities of existing tools to find this type of vulnerability.

While the OpenSSL program is complex, the cause of the vulnerability is simple. The software never verified the design assumption that the length of the content to be returned to the caller was less than or equal to the length of the payload sent. Verifying that input data meets its specifications is a standard activity performed for quality, not just for security. The associated software errors that led to the "goto fail" and Heartbleed vulnerabilities should have been identified during development. These two examples highlight the fact that quality practices, such as inspections and reviews of engineering decisions, are essential for security; testing and code analyzers must be augmented by disciplined quality approaches.

Improve Security with Quality

Our research suggests that implementing systems with effective operational security requires incorporating both quality and security considerations throughout the lifecycle. Predicting effective operational security requires quality evidence and security expert analysis at each step in the lifecycle. If defects are measured, it follows that from 1 percent to 5 percent of them should be considered vulnerabilities; for example, a component with 1,000 known defects would be expected to contain roughly 10 to 50 vulnerabilities. Likewise, when security vulnerabilities are measured, code quality can be estimated by considering them to be 1 percent to 5 percent of expected defects. Further evaluation of systems is needed to see if the patterns suggested by our analysis continue to hold.

We explored several options to expand our sample size but found limited data about defects and vulnerabilities assembled in a form that could be readily analyzed. This analysis must be done for each unique version of a software product. At this time, evaluation of each software product requires careful review of each software change and reported vulnerability. The review not only matches up defects with source code versions, but also reviews each vulnerability reported against the product suite to identify defects specific to the selected product version by parsing available description information and identifying operational results for the same source code. Collection of data about each product in a form that supports automation of this analysis would greatly speed confirmation.

We welcome your feedback on our research. Please leave comments below.

Additional Resources

To read our technical report, Predicting Software Assurance Using Quality and Reliability Measures, please click here.
SEI · Blog · Jul 27, 2015 01:19pm
Tom Simcox - Content Designer Last week I was lucky enough to attend Leeds GovJam - part of the Global GovJam taking place in over 30 cities around the world. It brought together people interested in service design from across the public sector. It was my first GovJam, and unlike anything I’ve experienced before. Doing not talking As a content designer and civil servant, I think we can spend too long talking, justifying or defending our position and ideas. It can get in the way of delivery. At GovJam, things are different. Each GovJam event around the world creates products based on the same secret theme. From the moment it was unveiled, jammers were thinking about problems, quickly building solutions and testing with real people within hours. Building teams and exploring the theme The theme was surprisingly vague. But the simple image of what appeared to be a lock allowed us to explore ideas and be creative, without constraints. We voted on the ideas that interested us and formed teams. My team wanted to explore ways of helping people break out of, or ’unlock’ poverty. Getting out of the building - discovering the problem We hit the streets of Leeds. We talked to people about their attitudes towards food, the choices they made, and about barriers to eating healthily. We’d thought that access to cheap, fresh, nutritious ingredients was holding people back from making better choices. But our research gave us a better understanding. We discovered that people needed to be shown an alternative. Not just told to make better choices. As a result, the idea of the ‘Cook Truck’ was born. The service would take food education into the community. It would show and teach people an alternative. Removing the fear of failing - build, learn, repeat We built a prototype. We didn’t sit down to debate. We didn't talk ourselves out of anything, or question whether it would work. We weren’t afraid of failing. We worked fearlessly and creatively. We made use of whatever we could lay our hands on (Lego, plasticine, lots of cardboard) and just built it. And when we’d built it we didn’t stop there. We showed it. We watched others experience it (we acted out each other’s personas) and we learned from it. And then we improved it. Designing in the open - sharing our prototypes Teams shared ideas with other teams and this collaboration was really important. We even joined the Athens GovJam on Skype to see what they’d been doing. At the end of the jam we published our prototypes on the Global GovJam website. Anyone can look at these and build on the work we’ve started. At the final show and tell, we received a surprise visit from Tom Riordan, CEO of Leeds City Council. Tom spoke about the need to open up service design in the public sector. Budget cuts mean councils need to work smarter, and they need our help. Before I left the GovJam I was thrilled to learn that the council had been looking at an idea similar to the Cook Truck. They’d heard about what we’d designed and were keen to talk to us! What I’m taking back to DWP I was amazed at how much we created in just 48 hours. GovJam is all about trying new ways of designing and collaborating with others. We did it while having lots of fun! But we did more than that. We discovered unmet needs of the citizens of Leeds. We started designing public services to meet them. We did it by being fearless, by going out and talking to people to find out how we could make their lives better. I’m new to my design role, so for me, GovJam was the perfect opportunity to learn by doing. 
I experienced agile design at its best, and saw what can be done by people passionate about change. If we put users at the heart of what we’re building, aren’t afraid to fail, and share what we’re doing, we can make a real difference. (Thanks to Lisa Jeffrey for use of some of her photos - visit Flickr to see more of Lisa’s images from Leeds GovJam)
DWP Digital · Blog · Jul 27, 2015 01:19pm
By William Wood, Senior Member of the Technical Staff, Software Solutions Division

Legacy systems represent a massive operations and maintenance (O&M) expense. According to a recent study, 75 percent of North American and European enterprise information technology (IT) budgets are expended on ongoing O&M, leaving a mere 25 percent for new investments. Another study found that nearly three-quarters of the U.S. federal IT budget is spent supporting legacy systems. For decades, the Department of Defense (DoD) has been attempting to modernize about 2,200 business systems, which are supported by billions of dollars in annual expenditures intended to support business functions and operations.

Many of these legacy systems were built decades ago using technologies available at the time and have been operating successfully for many years. Unfortunately, these systems were built with components that are becoming obsolete and have accompanying high licensing costs for commercial off-the-shelf (COTS) components, awkward user interfaces, and business processes that evolved based on expediency rather than optimality. In addition, new software engineers familiar with current technology are unfamiliar with the domain, and documentation is scarce and outdated. Other problematic factors include business rules that are embedded in code written in obsolete languages using obsolete data structures, and the fact that the cadre of aging domain experts maintaining legacy systems are unfamiliar with newer technologies. This blog post provides a case study of a modernization effort conducted for a federal agency by SEI researchers on such a large-scale, legacy IT system.

The Horseshoe Model

A general approach to modernization is based on the horseshoe model shown in the figure below. The basic principle of the horseshoe model is that transformations are more costly at the higher levels. Merely changing technologies (e.g., moving from one commercial database management system (DBMS) to another) is done at the technical level, and there are many commercial tools to assist in this transformation. If there are significant changes to business processes, roles, and user interfaces, the modernization will take a greater effort, as measured in terms of cost, performance, and risk.

Recently, a number of SEI engineers were tasked with planning a proposed modernization of a large-scale, legacy IT system (mainframe, hierarchical database, COBOL, job control language [JCL], green screens, spaghetti code) to a modern architectural framework (commodity hardware, Linux OS, Java, relational database, a well-defined framework for distributed processing, 4-layer applications). The modern architectural framework was defined by a technical reference architecture (TRA) and a common platform infrastructure (CPI) satisfying the TRA. The TRA included many constraints on the development, operation, and support for both the data structures and the applications. The CPI included the computing device, software language and associated tools, operating system, relational database and associated tools and services, web services, and other services. The CPI would also be used by other modernization projects within the federal agency.

We defined a plan with four phases but were involved directly with only the first two. The first phase consisted of only SEI engineers. An SEI engineer led the second phase, which included five customer representatives. The phases are described in detail below.
First Phase

The team analyzed 14 responses to a request for information (RFI) from commercial IT contractors to migrate the legacy software to the modern framework. This migration involved making as few changes as possible to get the system onto the new TRA/CPI: move it to the CPI in an infrastructure minimally satisfying the TRA; use the CPI, including a specific COTS relational database with associated tools; leave the COBOL and green screens; and make as few changes as possible to the spaghetti code, legacy applications, and data structures' architecture. This approach was designated as a "lift and shift" (L&S). The RFI mentioned six issues to be addressed:
a. migration from the hierarchical database to a relational database
b. migration of application code
c. integration and testing of the modernized system
d. maintenance of the application code
e. acquisition and contracting constraints
f. past performance

These six issues form the top six rows in the table shown below. The team also considered whether the proposals included sufficient technical detail to resolve these six issues; these considerations formed the next six rows of the table. We included two more issues that form the bottom two rows. Each proposal also included an expected cost and schedule.

The table below shows how the proposals (A through N) were judged to have addressed the issues. Green indicates that an issue was addressed in a satisfactory manner, yellow that it was addressed somewhat, and red that it was not addressed. The cost and schedule numbers were not included in the table, but the costs varied by a factor of two; the schedules were all about the same. Proposers A, B, and C are almost completely "green," indicating that they both understood the issues and knew how to tackle them technically. Proposers F, G, H, and I clearly understood the issues but did not describe the technology they would use to resolve them in sufficient detail. Proposals L, M, and N more or less indicated "we are good at this sort of thing" but failed to address the issues and gave no technical detail. Proposers D, E, J, and K were rather vague on the issues and technology.

The analysis convinced the team that most responders considered a lift and shift to be feasible, so we proceeded on that basis. The team recommended that an effort be started immediately to better understand the structure of the legacy software and data structures and how the different classes of users interacted with these structures. We called this process discovery and analysis (D&A), and it consisted of the following activities:
- review the documentation (it was sparse and dated)
- analyze the code and data structures for connectivity relationships using commercial tools, such as Lattix
- determine how business processes and user roles accessed the code and data structures
- collect the real-time traffic patterns already being measured on the system

In addition, the programmers knew that there were existing issues with the system, such as dead code, dead tables, dead attributes, redundant copies of attributes, and cloned and owned procedures. These issues can also be identified with the same tools. Analysis of the relationships between software elements (data files, programs) can then be used to form clusters of data files, programs, and users with minimal interactions. These clusters would then form the basis for a phased lift and shift effort.
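As a minimal sketch of that clustering step, the following Python treats programs and data files as nodes, their access relationships as edges, and takes connected components as candidate migration clusters. A real analysis (for example, with a tool such as Lattix) would also weigh coupling strength and measured traffic; the element names and edges here are invented.

    from collections import defaultdict

    # Invented access relationships: (program, data file) pairs extracted from code
    # and JCL analysis. In practice these would come from a dependency-analysis tool.
    edges = [
        ("PAYCALC", "EMP-MASTER"), ("PAYRPT", "EMP-MASTER"),
        ("BILLING", "CUST-FILE"),  ("INVOICE", "CUST-FILE"),
        ("AUDITLOG", "LOG-FILE"),
    ]

    graph = defaultdict(set)
    for prog, data in edges:
        graph[prog].add(data)
        graph[data].add(prog)

    def connected_components(graph):
        seen, clusters = set(), []
        for node in graph:
            if node in seen:
                continue
            stack, cluster = [node], set()
            while stack:                      # depth-first walk of one component
                n = stack.pop()
                if n in cluster:
                    continue
                cluster.add(n)
                stack.extend(graph[n] - cluster)
            seen |= cluster
            clusters.append(cluster)
        return clusters

    for i, cluster in enumerate(connected_components(graph), 1):
        print(f"candidate cluster {i}: {sorted(cluster)}")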
Other important considerations that require management involvement included the following:

- Can the authoritative data be split between the legacy system and the modern system?
- Is there a need to fail back to the legacy system when a failure occurs in the modern system?
- How much of the transformation can be accomplished using tools, and how much manual intervention is required?

It was recommended that a small task be initiated to transform one of the clusters. In addition, some constraints must be placed on any solution, including

- ensuring that the most critical users are not inconvenienced by a requirement to log in to and switch between multiple systems to accomplish their work
- avoiding transactions that would require atomicity across the legacy and modern systems

However, a D&A could not be initiated without performing a return on investment (ROI) analysis for the whole migration. This ROI analysis was done by a customer expert with SEI input.

Second Phase. Based on the RFI responses (but not the D&A results, because the D&A had not yet been conducted), the team proposed six alternative approaches ranging from doing nothing (as a baseline) to completely re-engineering the system. The team viewed the lift and shift as a transformation at the technical architecture level, a tool-based transformation of the applications and replacement of green screens as occurring at the application architecture level, and a re-engineering effort as being at the business architecture level. The team developed a set of 23 factors to compare the alternatives. These factors can be found in the Additional Resources section at the end of this blog. The factors were initially based on the team’s architectural knowledge and experience and the knowledge of the software engineers on the team tasked with sustaining the legacy system. The factors were then reviewed against those from the OMB Exhibit 300, which the Office of Management and Budget (OMB) uses (along with other factors) to analyze proposed and ongoing programs. This comparison led to the introduction of a few more factors. The factors were grouped by cost, performance, and risk. We developed a simple set of measures (1, 3, 5, and 8), which were applied to each factor, with 1 being worst and 8 being best. Because the factors were unevenly split among the groups, we also normalized the measures over the groups (a minimal sketch of this scoring scheme follows the Third Phase description below). The ranking of the alternatives is shown in the graphic below; as the chart indicates, higher numbers represent better options. The decision makers decided on a hybrid approach:

i. Conduct a D&A
ii. Lift and shift
iii. Transform
iv. Re-engineer

In addition, the team needed to meet with users to discover which business processes were cumbersome and time consuming and should be re-engineered.

Third Phase. Build an end-state architecture on top of the modern architectural framework, defining

- applications as services and data structures, and their mapping to legacy applications and data files
- the transformation method to be used
- the legacy COTS tools and the CPI COTS tools (developmental, test, and operational)
- the external interfaces

In addition, the end-state architecture must account for use case sequences and business case sequences. The end-state architecture must be given a thorough review by stakeholders, and a registry of technical risks must be established based on the evaluation and subsequent upgrades and decisions.
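To make the scoring scheme described under the Second Phase concrete, here is a minimal, hypothetical sketch of applying the 1/3/5/8 measures to factors grouped by cost, performance, and risk, and normalizing so that unevenly sized groups carry equal weight. The alternatives, factor counts, and scores are invented for illustration; they are not the team's actual 23 factors or results.

```python
# Hypothetical factors and scores for three alternatives; measures are
# restricted to {1, 3, 5, 8}, with 1 worst and 8 best, as in the post.
scores = {
    "Do nothing":     {"cost": [8, 5], "performance": [1, 1, 1], "risk": [3]},
    "Lift and shift": {"cost": [5, 5], "performance": [3, 5, 3], "risk": [5]},
    "Re-engineer":    {"cost": [1, 3], "performance": [8, 8, 5], "risk": [1]},
}

def normalized_total(by_group):
    """Average within each group so unevenly sized groups carry equal weight,
    then sum the group averages."""
    return sum(sum(vals) / len(vals) for vals in by_group.values())

for alternative, by_group in scores.items():
    print(f"{alternative:15s} {normalized_total(by_group):.2f}")
# Higher totals indicate better options, mirroring the ranking chart in the post.
```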
Fourth Phase. Build a multi-phase architectural roadmap that allows deviations from the framework constraints at the intermediate phases. This roadmap should start with some pilot projects to better understand the difficulties in all approaches: lift and shift, transform, and re-engineer. The phasing should take into account the following:

- the progress in the development of the CPI and its impact on the migration
- choosing low-hanging fruit early
- organizational boundaries that will impact the development
- splitting of authoritative data
- the process to be followed: move data first, move code first, or move a grouping of code and data
- the efficacy of tool support

Each phase of the target architecture must have views showing the following:

- which legacy elements are to be replaced and a mapping of where they are to be introduced in this phase
- what end-state elements are to be introduced
- cross coupling between systems
- additions and removals of interfaces between the legacy system and the PTA
- how each class of user will interact with the mixed system

As progress is made on a phase, especially the first or pilot phase, more and more information will become available to modify the roadmap, both within the phase currently being implemented and in the next phase. For example, if a new CPI capability is unstable and this impacts the move of a particular application, the move can be rescheduled until the CPI stabilizes.

Wrapping Up and Looking Ahead

A basis for modernizing the system was determined, and criteria were created for choosing between alternative approaches. A way of clustering the software elements was described, and the basis for a target architecture was defined. The factors to be used in a modernization effort will depend somewhat on the attributes of the legacy system, the attributes of the TRA and CPI, and the business drivers of the program office funding the migration. The factors developed for this migration effort can serve as a basis for developing factors for another migration effort. Building a roadmap, however, relies on choosing the order in which to move the clusters and the milestones at which some class of users will be cut over from the legacy system to the modernized one. Building a roadmap is a complex optimization problem in which many issues must be resolved and decisions made at different levels. For example, whether or not to have authoritative data split between the two systems is a high-level decision made at the program management level. If the split is allowed, then at specific milestones some users can start using the modern system while others still use the legacy system, demonstrating progress against a schedule. If the split is not allowed, then progress can only be measured against tests conducted at milestones. The efficacy of tool support can only be determined by using the tools to migrate a portion of a cluster and gaining tool familiarity, then writing a lessons-learned report to provide guidelines to later engineers on how best to use the tools. Ordering the migration of clusters is an optimization problem with a large search space. First, there is the problem of defining the utility function to minimize, expressing the problem mathematically, and defining the feasible solution space where all hard constraints are satisfied. A good example of a hard constraint might be that critical users conducting critical capabilities must always use either the legacy system or the modernized system. Some of these issues are being addressed in a new research effort underway at the SEI.
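As a toy illustration of the roadmap-ordering problem, the sketch below enumerates migration orders for a handful of invented clusters, rejects any order that violates a hard constraint (here, that clusters sharing critical users must be migrated back to back), and picks the order that delivers the most value earliest. A real roadmap involves far more factors, but the shape of the optimization is the same.

```python
from itertools import permutations

# Invented clusters with migration cost, business value, and a marker for
# clusters whose critical users must never be split across the two systems.
clusters = {
    "A": {"cost": 2, "value": 5, "critical_with": None},
    "B": {"cost": 4, "value": 3, "critical_with": "C"},  # B and C share critical users
    "C": {"cost": 3, "value": 4, "critical_with": "B"},
    "D": {"cost": 1, "value": 2, "critical_with": None},
}

def feasible(order):
    """Hard constraint: clusters that share critical users must be migrated
    back to back, so those users switch systems only once."""
    for name, info in clusters.items():
        partner = info["critical_with"]
        if partner and abs(order.index(name) - order.index(partner)) != 1:
            return False
    return True

def utility(order):
    """Reward delivering value early: value weighted by inverse position."""
    return sum(clusters[n]["value"] / (i + 1) for i, n in enumerate(order))

best = max((o for o in permutations(clusters) if feasible(o)), key=utility)
print("Suggested migration order:", best)
```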
We welcome your feedback on this research. Please leave comments below.

Additional Resources

To view the presentation Architectural Insights into Planning a Legacy Systems Migration, which I co-presented with Michael Gagliardi and Philip Bianco at the 2014 TSP Symposium, please click here.
SEI   .   Blog   .   Jul 27, 2015 01:18pm
It’s always interesting to hear how the people management profession differs worldwide. Workplace laws, business culture and social mores vary country by country. What works in one nation, industry or even company, may not work in another because when it comes to managing people—the most complex but critical aspect of business—there is no one-size-fits-all approach. Still, some keys to HR’s success are universal. This was our thinking at SHRM when we developed the SHRM Competency Model. Through extensive global research, we set out to codify...
SHRM   .   Blog   .   Jul 27, 2015 01:18pm
The Secure Communications team recently passed their GDS Alpha review to move into Beta. Product Owner Rachel Woods reflects on the review and offers some insider knowledge on the experience.

Rachel Woods - Product Owner

There's a phrase often used in the legal world: "if you come to equity, come with clean hands". As a law student I used to imagine standing in court, the high Victorian dark wood judicial bench afore me with a stony-faced judge peering over his half-moon glasses and me all squeaky-voiced and squirming with red-raw, much washed hands, explaining, "They're sparkling, m'lud." Fast forward more than ten years and this was the first image that sprang to mind when someone uttered the words ‘Digital by Default Alpha Review’. As is usually the case, the fear is of the unknown. I want to share some of my experience in the hope I can prevent you from having similar traumatic flashbacks!

It's not scary

So firstly, it's not something to fear. Not if you know your project. The panel want to talk to someone who has responsibility and accountability for and within the project - they don't want a briefed figurehead. I've been on the Secure Comms project from the beginning, living and breathing our user needs with the rest of the team. I have the pleasure (and the burden) of understanding the history of the project, what decisions I've made and how they’ve moulded and shaped the project into what it is today. I know exactly why things are the way they are and I can explain that. Don't underestimate the importance of that.

It's a conversation

One of the things that worked really well at our review was our focus on showing the thing. I spent about ten minutes giving an overview of the project, just enough to explain what our overarching user need was and how our service would meet it. It really was quick - for the actual description of the service I used our elevator pitch. Why tell them about it when they are about to see it in action? Then, in the best double act since Torvill and Dean, our Lead Developer, Kevin Adams, and User Researcher, Chris Beardsell, walked the panel through the prototype.

Liz Whitefield (Delivery Manager), Rachel Woods (Product Owner), Chris Beardsell (User Researcher) and Kevin Adams (Lead Developer) pictured straight after their GDS Alpha review

As they did they talked about what users had said, what changes had been made and what work we had done with partners to get the solution right. By the time we had finished the demo we had actually covered most of the points within the review criteria. The criteria are really helpful in ensuring you've thought about all the aspects of your project. I still came out thinking, "I forgot to mention..." But the criteria really help you to have a framework for ensuring you cover all the good stuff you and your team have done. It helps you as much as it helps the panel.

It's an opportunity

Lastly, you can't overstate the value of the independent critical eye. We’re still learning on our team and the review is a great opportunity to get some insight on how you’ve been working as well as what you’ve been working on and get some recommendations. I'm so proud of what the Secure Communications team has achieved in passing the review and moving into Beta. It was exciting to show our ideas and work to a group of people who really got where we were coming from. It’s a good feeling, so my last - and possibly most important - piece of advice is ‘enjoy it’!
DWP Digital   .   Blog   .   Jul 27, 2015 01:18pm
By C. Aaron Cois, Software Engineering Team Lead, CERT Cyber Security Solutions Directorate

This post is the latest installment in a series aimed at helping organizations adopt DevOps. DevOps can be succinctly defined as a mindset of molding your process and organizational structures to promote

- business value
- the software quality attributes most important to your organization
- continuous improvement

As I have discussed in previous posts on DevOps at Amazon and software quality in DevOps, while DevOps is often approached through practices such as Agile development, automation, and continuous delivery, the spirit of DevOps can be applied in many ways. In this blog post, I am going to look at another seminal case study of DevOps thinking applied in a somewhat out-of-the-box way: Netflix. Netflix is a fantastic case study for DevOps because their software-engineering process shows a fundamental understanding of DevOps thinking and a focus on quality attributes through automation-assisted process. Recall that DevOps practitioners espouse a driven focus on quality attributes to meet business needs, leveraging automated processes to achieve consistency and efficiency.

Netflix’s streaming service is a large distributed system hosted on Amazon Web Services (AWS). Since there are so many components that have to work together to provide reliable video streams to customers across a wide range of devices, Netflix engineers needed to focus heavily on the quality attributes of reliability and robustness for both server- and client-side components. In short, they concluded that the only way to be comfortable handling failure is to constantly practice failing. To achieve the desired level of confidence and quality, in true DevOps style, Netflix engineers set about automating failure.

If you have ever used Netflix software on your computer, a game console, or a mobile device, you may have noticed that while the software is impressively reliable, occasionally the available streams of videos change. Sometimes, the ‘Recommended Picks’ stream may not appear, for example. When this happens, it is because the service in AWS that serves the ‘Recommended Picks’ data is down. However, your Netflix application doesn’t crash, it doesn’t throw any errors, and it doesn’t suffer from any degradation in performance. Netflix software merely omits the stream, or displays an alternate stream, with no hindered experience to the user—exhibiting ideal, elegant failure behavior.

To achieve this result, Netflix dramatically altered their engineering process by introducing a tool called Chaos Monkey, the first in a series of tools collectively known as the Netflix Simian Army. Chaos Monkey is basically a script that runs continually in all Netflix environments, causing chaos by randomly shutting down server instances. Thus, while writing code, Netflix developers are constantly operating in an environment of unreliable services and unexpected outages. This chaos not only gives developers a unique opportunity to test their software in unexpected failure conditions, but incentivizes them to build fault-tolerant systems to make their day-to-day job as developers less frustrating. This is DevOps at its finest: altering the development process and using automation to set up a system where the behavioral economics favors producing a desirable level of software quality.
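To make the idea concrete, here is a minimal, self-contained sketch in the spirit of Chaos Monkey. It is not Netflix's actual tool (the real Simian Army is open source), just an invented simulation in which a "monkey" randomly takes service instances down while the client degrades gracefully by omitting any stream whose backing service is unavailable.

```python
import random

# Invented catalog services backing rows in a hypothetical video UI.
services = {
    "recommended-picks": {"up": True},
    "continue-watching": {"up": True},
    "new-releases": {"up": True},
}

def chaos_monkey(services, kill_probability=0.3):
    """Randomly take service instances down, as a stand-in for terminating
    real server instances in a production environment."""
    for name, state in services.items():
        if random.random() < kill_probability:
            state["up"] = False
            print(f"[chaos] terminated {name}")

def render_home_screen(services):
    """Client-side behavior: show every row whose service responds and
    silently omit the rest -- no crash, no error surfaced to the user."""
    rows = [name for name, state in services.items() if state["up"]]
    print("Rendered rows:", rows if rows else ["fallback-catalog"])

chaos_monkey(services)
render_home_screen(services)
```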
Because they create software in this type of environment, Netflix developers design their systems to be modular, testable, and highly resilient against back-end service outages from the start. In a DevOps organization, leaders must ask: What can we do to incentivize the organization to achieve the outcomes we want? How can we change our organization to drive ever-closer to our goals? To master DevOps and dramatically improve outcomes in your organization, this is the type of thinking you must encourage. Then, most importantly, organizations must be willing to make the changes and sacrifices necessary (such as intentionally, continually causing failures) to set themselves up for success. As evidence of the value of their investment, Netflix has credited this ‘chaos testing’ approach with giving their systems the resiliency to handle the 9/25/14 reboot of 10 percent of AWS servers without issue. The unmitigated success of this approach inspired the creation of the Simian Army, a full suite of tools to enable chaos testing, which is now available as open source software.

Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.

Additional Resources

To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits, please click here. To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here. To listen to the podcast DevOps—Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here. To read all of the blog posts in our DevOps series, please click here.
SEI   .   Blog   .   Jul 27, 2015 01:18pm
Adrian Stone

I’m halfway through a 12-week Junior WebOps training program in DWP. It covers topics and skills that are essential to creating and maintaining user-centred digital services at DWP. I’ve completed another block and gained another huge stack of knowledge. It’s been a tough one for all of us, I think. At week 6 of 12, the fatigue is kicking in a bit. Don't get me wrong, it's great to be learning so much, and the course and camaraderie mean it’s always fun, but the intensity can be exhausting. There is so much knowledge to take in, retain, embed - a restful weekend will do us all the world of good, I’m sure.

Web programming

Block 2 covered a lot of the ‘Web’ side of WebOps. This involved looking at how to use cutting-edge web technologies, as well as the importance of building websites for all kinds of platforms such as desktops, smartphones and tablets. It’s been fascinating to get an insight into how some of the websites we use every day actually look 'under the hood'. Understanding these technologies will be invaluable when it comes to building and maintaining systems on the live projects we’ll soon be working on.

Version control

We also spent some time learning version control using a system known as "Git". This is a simple but powerful version control tool. It enables developers and others working on projects to keep a tight control over any type of code, plus any associated documentation.

The WebOps training room in the Leeds transformation hub

Using Git, a developer can work on their code separate from the 'live' code. This is important when maintaining live systems used by tens of thousands of people. Being able to constantly test and iterate outside the live environment means we can be far more creative in our technical problem solving. Git is the de facto standard these days (and it's free, too).

Embracing industry standards

It’s great that government departments are embracing industry-standard tools now, as it means developers coming into government from the private or voluntary sectors can get straight to work using the tools they are already used to, and be productive from day one. It’s also good for those of us in government, as we’re developing skills that will be relevant across the tech industry if we spend some time working outside government in future.

Looking forward to looking back

We finished this week, as we do every week, with an agile 'retrospective'. Here, we look back on the past few days’ teaching, thinking about what went well, what didn't go so well, and what could be improved upon. These sessions really help us to reflect on the week and consolidate our learning, and we all look forward to them. There’s the added bonus that by reviewing what we’ve been taught, how we’ve been taught it, and what could’ve gone better, we know that we are helping to improve the learning experience for ourselves and for future cohorts - always a nice way to end the week.
DWP Digital   .   Blog   .   Jul 27, 2015 01:18pm
Most employees think flexibility is critical to their job satisfaction: are employers listening? The latest SHRM Job Satisfaction and Engagement survey confirms what many in HR already know: the flexibility to balance life and work issues is important to almost every employee - with 91% of workers polled saying it was either very important (55%) or important (36%) to their job satisfaction. Yes, there were differences between the sexes - 97% of women rated it as either very important or important compared with 85% of men but even...
SHRM   .   Blog   .   Jul 27, 2015 01:18pm
By Christopher Alberts, Principal Engineer, CERT Division

Software is a growing component of systems used by Department of Defense (DoD), government, and industry organizations. As organizations become more dependent on software, security-related risks to their organizational missions are also increasing. Despite this rise in security risk exposure, most organizations follow a familiar pattern when managing those risks. They typically delay taking aggressive action to mitigate security risks until after a software-reliant system has been deployed (i.e., during the operation and maintenance of the system). This blog post highlights the Security Engineering Risk Analysis (SERA) Framework, a new approach developed by researchers in the CERT Division at the Carnegie Mellon University Software Engineering Institute to help organizations reduce operational security risks by proactively designing security controls into software-reliant systems (i.e., building security in up front, rather than retrofitting it as an afterthought).

Three Main Causes of Operational Security Vulnerabilities

In examining the origin of operational security vulnerabilities, our team of researchers noted that they generally have three main causes:

- design weaknesses
- implementation/coding errors
- system configuration errors

Over the years, significant effort (e.g., research, tool development, guidance) has been directed toward addressing vulnerabilities caused by implementation/coding issues and system configuration errors. As a result, implementation/coding vulnerabilities can be corrected during system operation and maintenance through the dissemination of security patches by vendors. In addition, secure coding practices, such as those developed by researchers on CERT’s Secure Coding Team, can proactively prevent the occurrence of certain implementation/coding vulnerabilities. System configuration vulnerabilities can be prevented (or corrected) by following accepted operational security practices, such as implementing appropriate authentication and authorization controls.

The SERA research team—in addition to me, the team includes Carol Woody and Audrey Dorofee—believes that it is important to focus on correcting design weaknesses because they are so pervasive. MITRE’s Common Weakness Enumeration (CWE) provides a community-developed view of software weaknesses. As of February 2014, design-related issues account for 40 percent of the 940 total CWEs. In addition, 76 percent of the top 25 most-dangerous CWEs are design weaknesses. We are also focusing on design weaknesses because security is often neglected during early lifecycle activities. Addressing design weaknesses as soon as possible is especially important because these weaknesses are not corrected easily after a system has been deployed. Remediation of design weaknesses normally requires extensive changes to the system, which is costly and often proves to be impractical. As a result, software-reliant systems with design weaknesses often are allowed to operate under a high degree of residual security risk, putting their associated operational missions in jeopardy. Our initial research suggested that applying traditional security risk-analysis methods earlier in the lifecycle will not solve the problem because those methods cannot handle the inherent complexity of modern cybersecurity attacks.
Traditional methods of identifying risk focus on a simple, linear view that assumes a single threat actor exploiting a single vulnerability in a single system to cause an adverse consequence. Our experience shows that most cyber-attacks are much more complicated. For example, consider the Target breach of late 2013. This attack was not the result of a single vulnerability that allowed the criminals to access tens of millions of credit cards and personal information including names, addresses, and other personally identifiable information. The cybercriminals instead targeted a subcontractor in the Pittsburgh area and exploited its infrastructure to gain trusted access into the Target infrastructure. From the initial entry point, they were able to exploit vulnerabilities in multiple systems in Target’s infrastructure to access the data that they wanted. Ultimately, this attack included multiple systems across two organizations. The complexity of the Target attack is not unique. Multiple actors often exploit multiple vulnerabilities in multiple systems as part of a complex chain of events. Our research indicated that a new approach was needed to handle these types of security risks.

The solution that we developed, the Security Engineering Risk Analysis (SERA) Framework, focuses on minimizing design weaknesses by integrating two important technical perspectives: (1) system and software engineering and (2) operational security. The SERA Framework defines an engineering practice for analyzing risk in software-reliant systems that are being acquired and developed, with the ultimate goal of building security into those systems. The tasks specified in the framework are designed to be integrated with a program’s ongoing system engineering, software engineering, and risk management activities.

Four Tasks of the SERA Framework

The SERA Framework specifies the following four tasks:

Establish Operational Context. The first task defines the operational context for the analysis. In this task, the Analysis Team (i.e., an interdisciplinary team of 3-5 people that leads the risk analysis) identifies the system of interest for the analysis (typically the system that is being acquired or developed) and then determines how the system of interest supports operations (or is projected to support operations if the system of interest is not yet deployed). The team develops models of the most critical workflows or mission threads that are supported by the system of interest. Each software application or system typically supports multiple operational workflows or mission threads during operations. The goal is to (1) select which operational workflow or mission thread the team will include in the analysis and (2) document how the system of interest supports the selected workflow or mission thread. The team develops additional models that describe the technologies that support each workflow or mission thread and how data flows among those technologies. These models establish a baseline of operational performance for the system of interest. The team then analyzes security risks in relation to this baseline.

Identify Risk. The second task specified in the framework focuses on risk identification. In this task, the Analysis Team transforms security concerns into distinct, tangible risks that can be described and measured. Task 2 comprises the following steps:

a. The team starts by reviewing the operational models generated in the first task.
The team then identifies a threat that is causing concern as well as the sequence of steps required for that threat to be realized.

b. The Analysis Team then describes how each threat might affect the workflow or mission thread as well as selected stakeholders (i.e., it establishes the consequences produced by the threat).

c. Finally, the Analysis Team creates the narrative for the security risk scenario and compiles all data related to the scenario in a usable format.

It is important to note that the steps specified in the second task must be performed for each risk that is identified.

Analyze Risk. During this task the Analysis Team evaluates each risk in relation to predefined criteria to determine its probability, impact, and risk exposure. This evaluation involves several steps:

a. Establish probability. A risk’s probability provides a measure of the likelihood that the risk will occur. In this step, the Analysis Team subjectively determines and documents the probability of occurrence for the security risk scenario.

b. Establish impact. A risk’s impact is a measure of the severity of a risk’s consequence if the risk were to occur. The Analysis Team analyzes and documents the impact of the security risk scenario.

c. Determine risk exposure. Risk exposure measures the magnitude of a risk based on the current values of probability and impact. The team determines the risk exposure for the scenario based on the individual values of probability and impact documented in the previous two steps of this task.

Develop Control Plan. The fourth task in the framework focuses on establishing a plan for controlling a selected set of risks. The Analysis Team first prioritizes the security risk scenarios based on their risk measures. Once priorities have been established, the team determines the basic approach for controlling each risk based on pre-defined criteria and current constraints (e.g., resources and funding available for control activities). For each risk that is not accepted, the Analysis Team develops a control plan that indicates

• how the threat can be monitored and the actions to take when it is occurring (recognize and respond)
• which protection measures can be implemented to reduce vulnerability to the threat and minimize any consequences that might occur (resist)
• how to recover from the risk if the consequences or losses are realized (recover)

A subset of the control actions will have implications for the software (or system) requirements and design. The team must determine which actions might affect the requirements or design of the system of interest and document them for further analysis. Our technical note, Introduction to the Security Engineering Risk Analysis (SERA) Framework, provides examples for all four SERA tasks.

Key Differentiators

The SERA Framework incorporates three key features that differentiate it from other security risk assessments:

Use of operational models. Traditional security-risk assessments rely on a tacit understanding of the operational context in which a software-reliant system must operate. Our experience indicates that tacit assumptions are often incorrect or incomplete, which adversely affects the results of a security risk analysis. The SERA Framework uses operational models to describe a system’s operational context explicitly and establish a baseline of operational performance to inform risk identification and analysis.

Semantic structure to document security risks.
Most traditional assessments rely on linear, simplistic formats for recording risks (e.g., if-then statements). These basic structures fail to capture the complexities and nuances of modern cybersecurity attacks. To address this deficiency, the SERA Framework uses scenarios to document a cybersecurity risk and can create fairly complex risk scenarios. A security risk scenario conveys information describing how one or more threat actors can exploit multiple systems to cause adverse consequences for stakeholders. A scenario essentially chains multiple basic risks together to describe how an attack might actually occur in the real world.

Shared view of a system, its operational context, and its associated security risks. The SERA Framework presents a view of the system that is easily understood by multiple stakeholders including system and software engineers, security experts, and program managers. As a result, complex security risks can be evaluated effectively and then prioritized based on their impact to the operational mission of the system.

These differentiators distinguish the SERA Framework from other security risk assessment and analysis approaches. They also provide the basis for analyzing complex, multi-faceted security risk scenarios early in the acquisition lifecycle.

Collaborations

We collaborated with Dr. Travis Breaux of CMU’s Institute for Software Research in the School of Computer Science, a renowned expert in the fields of risk management and security requirements. One aspect of Dr. Breaux’s research is exploring the role that expertise plays in eliciting risk. We were able to build upon this research as we developed the SERA Framework.

Looking Ahead and Future Work

Software is a growing component of systems across all government and industry sectors. The SERA Framework is a first step in our ongoing research to help program managers establish reasonable confidence that (1) a software-reliant system will function as intended when deployed and (2) its cybersecurity risk will be kept within an acceptable tolerance over time. The initial results from applying the SERA Framework are promising. However, we have many additional research and development activities to complete. Here, we briefly highlight two of those activities. The first is using threat archetypes (i.e., a pattern or model illustrating the key characteristics of a complex threat scenario) to facilitate risk identification. The second is working to help organizations comply with new federal mandates and guidance, such as National Institute of Standards and Technology (NIST) Special Publication 800-37, Guide for Applying the Risk Management Framework to Federal Information Systems, which provides guidance for applying the NIST Risk Management Framework (RMF) to federal information systems.

We welcome your feedback on our research. Please leave feedback in the comments section below.

Additional Resources

For a more detailed explanation of our research please read our recent technical note, Introduction to the Security Engineering Risk Analysis (SERA) Framework, which is available here.
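To ground the bookkeeping in Tasks 2 through 4, here is a minimal, hypothetical sketch of representing a security risk scenario as a chain of threat steps and prioritizing scenarios by risk exposure. The scales and the exposure formula (probability times impact on invented 1-5 scales) are illustrative only; SERA itself defines its own criteria in the technical note.

```python
from dataclasses import dataclass, field

# Invented scales: probability and impact are scored 1 (low) to 5 (high) and
# exposure is simply their product, purely for illustration.
@dataclass
class RiskScenario:
    name: str
    steps: list          # ordered threat steps, possibly spanning systems
    probability: int     # 1-5
    impact: int          # 1-5
    exposure: int = field(init=False)

    def __post_init__(self):
        self.exposure = self.probability * self.impact

scenarios = [
    RiskScenario(
        name="Stolen vendor credentials lead to data exfiltration",
        steps=["phish subcontractor", "pivot into internal network",
               "exploit application server", "exfiltrate records"],
        probability=3, impact=5),
    RiskScenario(
        name="Misconfigured interface exposes workflow data",
        steps=["scan public endpoints", "query unauthenticated interface"],
        probability=4, impact=2),
]

# Control planning starts from the highest-exposure scenarios.
for s in sorted(scenarios, key=lambda s: s.exposure, reverse=True):
    print(f"exposure={s.exposure:2d}  {s.name}  ({len(s.steps)} steps)")
```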
SEI   .   Blog   .   Jul 27, 2015 01:18pm
DWP Technology is delivering new digital technology to meet user needs - as shown by the Carer’s Allowance Digital Service which now has 90% user satisfaction. Like all businesses, we also need to keep an eye on performance of our existing, complex, large enterprise technology estate. Our pensions, disability services, working age benefits, child maintenance and Universal Credit technology is used to pay £165bn annually across 7 million claims. These services are running at 100% uptime this year. But the systems used by 90,000 staff across 950 locations to interact with 22 million citizens have not been updated for decades. Systems were designed in isolation, resulting in integration issues accumulating over the years - preventing Operations colleagues from working efficiently. The last 80 days have seen an 82% productivity improvement as a result of resolving the top outstanding IT issues. Five things stood out in the retrospective:

Acknowledge the problem: Three months ago, 41 of our best people came together and acknowledged that we were not delivering a service to be proud of. You had to be in the room to feel the determination to drive change.

A dedicated, integrated team: We stood up a multi-disciplinary team across organizational boundaries, with DWP experts working alongside colleagues from HP, Accenture, IBM, TCS, Atos and Cap Gemini. This team had a singular goal to resolve 40 longstanding issues. Expertise triumphed over hierarchy and contracts as a group of colleagues became a team.

Begin and end with user needs: Team members sat with users to understand the issues from their perspective. These conversations helped us understand what would make the most difference to Operations users. As sprints delivered solutions, the users tested solutions in the real world across offices.

Sprint to outcomes: Ambitious goals were broken down into manageable pieces of work tackled via specific sprints. Daily calls and stand-ups were used to drive progress, identify and tackle blockers, share knowledge and support problem solving to deliver at pace. Conversations were about delivering specific things, not planning to deliver.

Leadership: A group of leaders was inspired to achieve the impossible and followed through with tenacity to deliver impossible goals. The team built big relationships to listen to each other and do more together than anyone could have done within their own team.

Our strategy is to deliver

The results speak for themselves - in the last 80 days service hours lost due to IT issues across our IT estate have plummeted by 82%. 40 top issues have been fixed and the team also delivered:

- automated interfaces to remove rekeying and errors between 2 of our major customer data systems
- automated production of reports
- reduced processing time by up to 1 hour per case for one system, while eliminating slow-running transactions on another system, saving 15 seconds per case
- reduced the time to log on to virtual desktops by over 50%

I am proud to have the opportunity to work with colleagues delivering exceptional results. Improving service to our Operations colleagues helps them to deliver excellent service to DWP’s customers.
DWP Digital   .   Blog   .   Jul 27, 2015 01:17pm
By John Klein, Senior Member of the Technical Staff, Software Solutions Division

Acquisition executives in domains ranging from modernizing legacy business systems to developing real-time communications systems often face the following challenge: Vendors claim that model-driven engineering (MDE) tools enable developers to generate software code automatically and achieve extremely high developer productivity. Are these claims true? The simple answer might be, "Yes, the state of the practice can achieve productivity rates of thousands of function points and millions of lines of code per person-month using MDE tools for automatic code generation." The complicated reality is that MDE consists of more than code generation tools; it is a software engineering approach that can impact the entire lifecycle from requirements gathering through sustainment. While one can make broad generalizations about these methods and tools, it is more useful to consider them in the context of a particular system acquisition. Aligning MDE methods and tool capabilities with the system acquisition strategy can improve system quality, reduce time to field, and reduce sustainment cost. On the other hand, when MDE methods and tools do not align with the acquisition strategy, using them can result in increased risk and cost in development and sustainment. This blog post highlights the application of MDE tools for automatic code generation (in the context of the full system lifecycle, from concept development through sustainment) and also provides a template that acquirers can use to collect information from MDE tool vendors.

Foundations of Our Work: AADL

Researchers at the SEI have been doing work in MDE for several decades. In particular, Peter Feiler and Julien Delange have been working with the Architecture Analysis and Design Language (AADL) modeling notation for use in real-time embedded systems, safety-critical systems, and other high-assurance systems. Their work has focused on analysis and up-front assurance that the system will function as intended, with less emphasis on code generation. This latest SEI MDE effort—in addition to me, the team included Harry Levinson and Jay Marchetti—focuses more on code generation, specifically in the context of business systems, with code generation benefits realized by the developer. As detailed in our technical note, Model-Driven Engineering: Automatic Code Generation and Beyond, while certain domains can achieve extremely high productivity using model-driven approaches, it is important to realize that code generation is just one small piece of the entire software lifecycle. In software engineering, there is a tight coupling between the system domain (e.g., business system, command and control system, or avionics system), the methods used throughout the system lifecycle, and the tools used to support the chosen methods. Furthermore, government acquirers have the challenge of selecting contractors to develop their systems. This selection process includes evaluating the development team, the development methodology, and the tools in the context of the system acquisition strategy. Or, to state it more simply, in MDE, if you develop code using one tool it can be expensive to switch to another tool later in the software development lifecycle. In addition to acquiring code, therefore, software acquirers should also consider the tools needed to sustain the software. When an organization adopts a model-driven approach, it is also adopting a particular toolset and technology.
These additional adoptions are of particular concern in the Department of Defense (DoD), where the focus is on acquiring and maintaining longer-lived systems. In many commercial contexts, there is less hesitation to rebuild a system from scratch.

Code Generation in Business Systems

Firms such as Gartner, Forrester, and IDC have focused their analyses of MDE technology on commercial IT developers and software providers. As stated earlier, our analysis focused on the unique acquisition concerns of the DoD and other federal agencies, in which systems are acquired and maintained for longer periods of time. Specifically, we examined business systems since this is an area where code generation tools are having significant impact. This analysis included existing technologies and approaches used for commercial off-the-shelf (COTS) technologies, and it investigated how those same principles can be applied to the acquisition of MDE tools. We used the PECA Method (Plan, Establish Criteria, Collect Data, Analyze Results) to organize an acquirer’s technology assessment, and we used an established risk framework to identify criteria within the overall process.

Acquisition Strategy Implications

An acquisition strategy specifies which artifacts and data rights to acquire, as well as which artifacts to evaluate at each program decision point. The acquisition strategy also defines the approach to identifying, managing, and mitigating program risks. The use of MDE for automatic code generation has several implications for acquisition strategy, including the following:

Artifacts, data rights, and licenses. Development tooling is usually not a significant concern when acquiring software-intensive systems, but it can be a significant concern when acquiring a system developed using an MDE approach, particularly when using MDE for automatic code generation. In MDE-developed software, the models are the primary development artifacts, embodying the software architecture design and component designs, and ultimately driving the automatic code generation. Ideally, all software sustainment and evolution will also use the MDE approach, which requires data rights and the necessary licensing for the tools, models, and generated code. When only the automatically generated source code is acquired (without the models used to generate the code), then sustainment and evolution are more difficult because the automatically generated code is not structured for human readability and comprehension.

Design review scope and timing. The acquirer must review and evaluate appropriate artifacts at the right time in the acquisition cycle. For example, in an approach using MDE for automatic code generation, the software architecture documentation may consist of a subset of the code-generation model, along with accompanying documentation, to provide context and design rationale. The software architecture should be evaluated early in the design process (as discussed by Bergey and Jones). The evaluation scope and criteria, however, may need to be expanded to account for the use of the model not just to represent the architecture for communication among stakeholders but also to directly generate the executing software. Finally, reviewers may need to use the MDE tools to view the models—exporting the model into a generic format, such as portable document format (PDF) files, may not provide adequate visual resolution and the ability to efficiently navigate through the model.
Tool availability and access to the network where the model is stored become issues that the acquirer must address in planning the evaluation.

Impact on program risk. While an MDE approach promises automatic code generation, improvement of cost and schedule, reduction of technical risk by enabling early analysis, and the ability to demonstrate capabilities and validate requirements by using executable models or rapid prototypes, it also introduces new risks, including the following (see our technical note for a more detailed accounting of risks introduced by an MDE approach to automatic code generation):

- a development-time dependency on the tool chosen to support the process. The chosen tool is used to create and modify the model, which is then processed to generate the code. Unlike traditional source code, which can be created and modified by many different tools, the state of the market for MDE tools is that, in most cases, a model can be edited and modified only by the tool that created it, and changing tools may require rebuilding the entire model.

- cybersecurity assurance. As noted earlier, development-time and run-time dependencies have several implications for cyber assurance. For example, as cybersecurity policy and practices evolve, there is a risk that the tool may not generate compliant code (e.g., code that is compatible with required authorization mechanisms, access control policies, or encryption practices).

- run-time portability. Portability concerns manifest as a desire to execute the automatically generated code in several environments, each comprising different hardware and software infrastructure. These concerns may also manifest as a desire to change the system’s hardware and software infrastructure over time.

- run-time performance. The automatically generated code must satisfy the system’s runtime throughput, latency, concurrent request processing, and other performance quality requirements. While the use of an MDE approach may provide early confidence that these requirements can be met, if one or more of these requirements change, there is a risk that the automatically generated code may not satisfy the new requirement.

- usability of generated user interfaces. In some system domains, such as business systems, the MDE tools may generate user interfaces as part of the automatically generated code. The generated user interfaces may support functions such as system configuration and administration, system monitoring, and end-user activities.

A Template for Collecting Information from MDE Tool Vendors

To help business system acquirers select and evaluate MDE tools, we have created a questionnaire template to use during the "Collect Data" step in the process, which can be downloaded here. Our accompanying technical note provides detailed guidance on how to interpret responses during the "Analyze Results" step. To develop the template, we started by reviewing guidance about how to develop criteria for selecting a tool based on your program-specific acquisition issues. We wanted to understand particular risk areas that may or may not be relevant for a program. We also wanted to understand how the use of MDE tools could help mitigate some risks but also introduce or increase other risks. We also relied on earlier SEI research that created a risk taxonomy. We used that taxonomy to examine how MDE approaches can either help mitigate some of those risks or may introduce new risks to a program.
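Before wrapping up, a toy example may help ground what "generating code from a model" looks like in practice. The sketch below is deliberately simplified and does not represent any particular vendor's MDE tool: it emits a small Python class from a declarative model and then loads it. Note that the generated source is regular but not written with human maintenance in mind, which is one reason acquiring the models and tooling, not just the generated code, matters.

```python
# A tiny declarative "model" and a generator that emits Python source from it.
MODEL = {
    "entity": "Claim",
    "fields": [("claim_id", "int"), ("claimant", "str"), ("amount", "float")],
}

def generate_entity(model):
    """Emit a class definition for the modeled entity as source text."""
    lines = [f"class {model['entity']}:"]
    args = ", ".join(f"{name}: {typ}" for name, typ in model["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for name, _ in model["fields"]:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines) + "\n"

source = generate_entity(MODEL)
print(source)          # inspect the generated source
exec(source)           # "build" step: bring the generated class into the runtime
claim = Claim(1, "A. Example", 72.50)   # noqa: F821 (Claim is defined by exec above)
print(claim.claimant)
```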
Wrapping Up and Looking Ahead

Our premise throughout our analysis of MDE methods and tools is that it is impossible to make broad-brush statements that are true for all programs. There are many mitigating factors, including

- the goals of the program
- the context that a program is working in
- high-priority objectives
- existing risks

Our analysis and risk taxonomy can help programs decide whether MDE approaches can help or hurt an organization, specifically whether a particular approach will fit with a program’s risk profile and goals. MDE provides the opportunity to reduce development costs, improve the quality of the software developed, and possibly increase the agility of the development process. Programs can realize these benefits only if the concept fits with their acquisition strategies. MDE tools that specialize in a particular type of system provide high productivity but solve only a very narrow type of problem. Our analysis found that the narrower the scope of an MDE tool, the more that tool is tied to a vendor. We welcome your feedback on our work.

Additional Resources

You can learn about our research on MDE by reading the technical note Model-Driven Engineering: Automatic Code Generation and Beyond. The template for the accompanying questionnaire can also be downloaded from this site.
SEI   .   Blog   .   Jul 27, 2015 01:17pm
The 2015 SHRM Annual Conference & Exposition is fast approaching and whether you’re a first-time attendee or an experienced conference veteran, there will be something for everyone at this year’s event. As thousands of SHRM members, media outlets and exposition vendors make plans to travel to Las Vegas June 28-July 1, SHRM will be working around the clock to prepare yet another amazing experience for attendees. It’s difficult to describe the excitement and electricity generated by the thousands of HR pros that attend each year. Maybe it’s the dynamic...
SHRM   .   Blog   .   Jul 27, 2015 01:17pm
BBC Broadcast Centre The latest round of user research for the Access to Work team takes us to the heart of the BBC at the Broadcast Centre in White City, London. The thought of visiting the BBC, for me at least, conjures up thoughts of bumping into celebrities around each corner and perhaps getting the chance to contemplate Shep’s memorial in the Blue Peter Garden. Whilst the reality is slightly more mundane, in terms of gathering insights about how the deaf community interact with our services it proves an invaluable exercise and provides a wealth of insights. We are here to meet a couple of BBC employees who are both profoundly deaf and also existing users of the Access to Work scheme for the provision of British Sign Language (BSL) interpreters in their day to day jobs. Access to Work is a government programme delivered by Jobcentre Plus which provides advice and a financial grant for practical support to overcome work-related barriers due to disability. It is available to customers with a disability who are in paid employment or who are about to start a job. Both employees work on See Hear, the BBC flagship magazine programme for the deaf and hard of hearing. The programme has been running since 1981 and has won several awards in recognition of its services to the deaf community. I think it is fair to say that both users have, at best, mixed feelings about Access to Work. Both acknowledge that the scheme provides valuable support for the help they need in the workplace, however both are equally critical of the time it can take to process claims. The perceived "clunkiness" of the process makes life difficult for their Production Management Assistant (PMA) who liaises with DWP each time a claim is lodged. Conscious of this, we return to speak with the PMA in the near future because understanding what the employer needs from a digital service will be equally important as we take things forward. The background duly set, I proceed to let both users talk me through their thoughts via their BSL interpreters as they work through the Access to Work prototype. I want to observe how they interact with the service and discern their user needs. The immediate insight is that the experience can be different for different levels of deafness. For example, one user has very slight hearing with the use of an aid and also has a good knowledge of English. As such he moves through the form with ease. He is also comfortable giving an email address and is also content that he is giving his consent for future correspondence via email when he checks the tick box. The other user has BSL as her first language as she has been profoundly deaf from birth and English is not her first language. She consequently struggles with English and her interpreter has to help her with the form. Email will not be an option for her in terms of further communication. The prototype lets her appoint a third party to take calls on her behalf should further contact be required but for her this really needs to be a BSL interpreter. The insight is that we need to make this clearer in the form so we can meet this particular need. As there will be other users out there for whom BSL is their first language we will be looking to research with more of them to refine these needs and ensure our service can meet them. All in all, a very interesting and useful day. 
Both users like the prototype and return to work full of enthusiasm about the strides being taken by DWP to convert Access to Work into an online service, so much so that they are preparing a See Hear blog to reflect their experience! For us the next few weeks will see more research with both employees and employers across the spectrum of disabilities embraced by Access to Work as we move towards GDS assessment. So lots of work still to do but today feels like real progress.
DWP Digital   .   Blog   .   Jul 27, 2015 01:17pm
By Hasan Yasar, Technical Manager, Cyber Engineering Solutions Group

This post is the latest installment in a series aimed at helping organizations adopt DevOps. The federal government continues to search for better ways to leverage the latest technology trends and increase efficiency of developing and acquiring new products or obtaining services under constrained budgets. DevOps is gaining more traction in many federal organizations, such as U.S. Citizenship and Immigration Services (USCIS), the Environmental Protection Agency (EPA), and the General Services Administration (GSA). These and other government agencies face challenges, however, when implementing DevOps with Agile methods and employing DevOps practices in every phase of the project lifecycle, including acquisition, development, testing, and deployment. A common mistake when implementing DevOps is trying to buy a finished product or an automated toolset, rather than considering its methods and the critical elements required for successful adoption within the organization. As described in previous posts on this blog, DevOps is an extension of Agile methods that requires all the knowledge and skills necessary to take a project from inception through sustainment, and that brings project stakeholders together within a dedicated team. Successful implementation of DevOps in the federal government is possible through collaborative involvement of all stakeholders, the development of governance in regard to infrastructure, and equipping operational staff with DevOps skills. As the DevOps process is used more and more in software development by industry firms, the common DevOps culture, automation, measurement, and sharing (CAMS) theme applies. In this post, we will begin to address some of the barriers and identify ways to enable the implementation of the DevOps philosophy in the federal government space. So, where do we start?

First Step: Acquisition Process

The first and most important step of DevOps implementation starts in the acquisition process. The traditional government approach to any system acquisition is the waterfall model. Unfortunately, this model often starts with the development of rigid requirement specifications and sets the project on a path for failure on schedule, technical objectives, and overall project cost. In addition, acquiring the entire system at once can result in design, testing, and integration problems. In some cases, the expected product may be outdated by the time it reaches production use. In the waterfall model, there is no easy way to address requirements changes during the development or deployment stages unless the teams re-do previous tasks, which is expensive and time-consuming. Complex and software-intensive systems require a shift away from traditional waterfall models to more Agile methods. Agile methods are more effective for focusing development cycles, iterative delivery, and managing costs for actual business needs. Government organizations must remove any barriers during the acquisition process. Some barriers that prevent acquisition staff from operating more rapidly are

- rigid requirements and timelines
- delivery methods
- lack of a formalized integration testing plan

On the other hand, following Agile methods can enable organizations to see smaller, but successful, incremental results and receive early feedback on delivered capabilities, which allows early problem resolution if there are any misinterpretations of tasks.
Additional information on adopting Agile methods during the acquisition process is available in previous blog posts and in ongoing work by SEI researchers to help acquisition professionals in the federal government. While adapting Agile methods to the acquisition process, organizations also need cultural involvement and preparation to support DevOps principles such as continuous delivery, integration, automation, and measurement. During the contracting phase, all project team stakeholders should provide input on the contract, as well as the functional requirements. As we mentioned earlier, the main principles of DevOps aim to bring the operational team together with developers, so effective communication is the cornerstone of this union. More specifically, communication between everybody is necessary: the program management team, the operational team (including IT, maintenance, support, and operational testing), federal agency security experts, system architects, and legal staff, among others. While it may take more upfront work to secure everyone’s input early in the contractual process, mistakes can be found and addressed before fingers hit keys for development. The input the acquisition department must have includes, but is not limited to, the following:
- system framework
- security policy/technical implementation guidelines (access control, required OS, etc.)
- operational testing requirements and automation testing scripts/tools
- identification of development platform and needs
- iteration cycles and delivery methods
- module/system integration requirements and platform/tool definition
- communication methods and establishing visibility across all project teams
The acquisition process should also address how the delivery of each capability will be automated and how continuous integration with other systems will occur (a simple sketch of such an automated delivery gate appears at the end of this post). Automation will enable the federal government to see status and capability as early as possible, allow stakeholders to gain a better understanding of the system regardless of its functional state, and allow them to pivot quickly should it require adjustment. After a successful module is delivered, end users will be able to begin working with new capabilities and become familiar with the system while the rest of the features are still being developed, tested, and deployed. Future posts on DevOps in government will explore cultural changes, governance structure, and automation and measurement as part of this topic in our bi-weekly DevOps blog series. Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
Additional Resources
To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits, please click here. To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here. To listen to the podcast DevOps—Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here. To read all of the blog posts in our DevOps series, please click here.
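The automation point above is easiest to see with a small example. What follows is a minimal, illustrative sketch in Python of an automated delivery gate for a single capability; the capability name, check commands, and report format are hypothetical assumptions, not any agency's or vendor's actual toolchain.

# Minimal sketch of an automated delivery gate for one capability.
# All names (Capability, run_checks, the check commands) are illustrative,
# not a real agency or vendor pipeline.
import subprocess
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Capability:
    name: str
    checks: list = field(default_factory=list)   # shell commands to run
    results: dict = field(default_factory=dict)  # command -> pass/fail

def run_checks(capability: Capability) -> bool:
    """Run each automated check (unit tests, security scan, integration
    test) and record pass/fail so every stakeholder sees the same status."""
    for cmd in capability.checks:
        completed = subprocess.run(cmd, shell=True)
        capability.results[cmd] = (completed.returncode == 0)
    return all(capability.results.values())

def status_report(capability: Capability) -> str:
    """Produce the plain-text status that program, operations, and security
    staff can all review after every iteration."""
    lines = [f"Capability: {capability.name}",
             f"Checked at: {datetime.now(timezone.utc).isoformat()}"]
    for cmd, passed in capability.results.items():
        lines.append(f"  {'PASS' if passed else 'FAIL'}  {cmd}")
    lines.append("Ready to deliver" if all(capability.results.values())
                 else "Blocked: fix failing checks before delivery")
    return "\n".join(lines)

if __name__ == "__main__":
    cap = Capability(
        name="claim-intake-module",  # hypothetical capability
        checks=["python -m pytest tests/", "python scripts/security_scan.py"],
    )
    run_checks(cap)
    print(status_report(cap))

Even something this small gives every stakeholder the same pass/fail view of a capability after each iteration, which is the early visibility the acquisition language needs to ask for.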
SEI . Blog . Jul 27, 2015 01:17pm
Over the past several months, it has been my great pleasure to get to know SHRM members and others through a series of conversations around Sheryl Sandberg’s book Lean In. This effort began in March when I posted on SHRM’s LinkedIn page to ask for volunteers to take part in a kind of virtual book club. More than 75 people responded. Sandberg had been slated to keynote the SHRM Annual Conference & Exposition next month, but she withdrew in the wake of the recent loss of her husband Dave...
SHRM . Blog . Jul 27, 2015 01:16pm
Marimba! Marimba! MARIIIIIMBAAAAAAAA!  My phone kept ringing wildly as I approached the movie theater this past Thanksgiving.  As many of you have guessed by now, I love the movies and I was on my way to see Horrible Bosses 2 (horrendous, I know) when I was taken down a different road by my cousin back in Miami.  I answered and he uttered, "You won’t believe the story I have for you. This is absolute MADNESS!"  ...
SHRM . Blog . Jul 27, 2015 01:15pm
By Suzanne Miller, Principal Researcher, Software Solutions Division
In 2010, the Office of Management and Budget (OMB) issued a 25-point plan to reform IT that called on federal agencies to employ "shorter delivery time frames, an approach consistent with Agile" when developing or acquiring IT. OMB data suggested Agile practices could help federal agencies and other organizations design and acquire software more effectively, but agencies needed to understand the risks involved in adopting these practices. Two years later, OMB directed agencies to consider Agile development in its 2012 contracting guidance. As organizations work to become more agile, they can employ the 12 principles outlined in the Agile Manifesto to assess progress. I work with a team of researchers at the SEI who explore the barriers and enablers to applying Agile in government settings. We have found that each of these principles plays out differently in the federal landscape. While some principles are a natural fit, others are harder to implement. This blog post introduces a series of discussions recorded as podcasts about the application (and challenges) of the 12 Agile principles across the Department of Defense (DoD).
First Agile Principle: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software. Below is an excerpt from our podcast:
Mary Ann: The problem I see is for those in the field, if this is already a fielded system, or even if it’s being developed, they can’t deal with that frequent of a release. So, they lump releases together, and then they send them out every 8 months, 9 months, 18 months, whatever it is that works for them from a deployment viewpoint.
Suzanne: So, even though we’re producing valuable software as a program, we may not actually get to deliver it, in the way that commercial settings might be able to deliver it.
Mary Ann: Correct.
Suzanne: I’ve seen what we call a "sandbox" as one of the ways that we deal with that. So, in a sandbox setting, I will put each iteration’s software into what we call "a sandbox area." That area could allow user access, but it isn’t a full deployment.
To listen to the complete podcast, please click here.
Second Agile Principle: Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage. Below is an excerpt from our podcast:
Suzanne: One of the ways to get it out to the field faster is to not presuppose that all of the requirements that we think of at the beginning have to be designed and implemented. We need to prioritize them in a way that allows us to get something out there that the people can try, so the learning can occur.
Mary Ann: One of the key things, if you’re going to use Agile methods, is have enough definition up front of what you want to do, but not so much detail that you can’t learn, that it can’t change, because your environment changed.
To listen to the complete podcast, please click here.
Third Agile Principle: Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale. Below is an excerpt from our podcast:
Suzanne: What we see in a lot of acquisitions is we get lots of deliveries, but the early deliveries are more documentation, more review meetings, more review meeting slide decks. That’s a really different focus for a lot of people in the contract settings where, you know, working software comes after all that stuff is delivered.
So this is saying, Don’t wait to actually deliver working software. So, you’ve got to really go from a document-centric lifecycle, and view of the world, to an implementation-centric focus.
Mary Ann: That’s true.
Suzanne: And that’s a culture change for the acquisition systems also, isn’t it?
To listen to the complete podcast, please click here.
Fourth Agile Principle: Business people and developers must work together daily throughout the project. Below is an excerpt from our podcast:
Suzanne: So, from their viewpoint, business people are essentially the marketing people that understand what the market is, what market they are trying to penetrate. It’s the end users who are actually going to use the product. So, they are looking at business people in a little different way than we do in the DoD. So, they don’t necessarily have quite that same difference between he who pays for it or she who pays for it and the person that’s using it.
Mary Ann: And, the thing is with the DoD environment, you obviously have to have the acquirers because those are the people that are trained to do the acquisition. And, they have the warrants, if you will: the permissions and the legal authorities to do it. However, they need to be working with the end users. And, typically, in a traditional environment, they do. They go out. They gather all the information from all the different end users, and they can be multiple groups.
To listen to the complete podcast, please click here.
Fifth Agile Principle: Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done. Below is an excerpt from our podcast:
Mary Ann: …Trust, but verify. That’s very much the moniker of today. However, some organizations trust more than others. In many cases trust is just not there at all. In that environment it would be very difficult to do an agile type of development without a lot of change in culture.
Suzanne: So, you and I have both seen some settings where a development organization, usually a contractor, is trying to use Agile principles. Yet, at the same time, they are being asked to do at least the same, if not more, documentation than they had in the past. They are asked for detailed team metrics, not just the typical management metrics. They are being asked for a lot of information that would make you think they are not very trusted. In those settings we’ve seen not very good success with Agile methods. I would assert that that’s actually one of the reasons—that there isn’t a feeling of trust.
To listen to the complete podcast, please click here.
Sixth Agile Principle: The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. Below is an excerpt from our podcast:
Suzanne: If the primary means of communication is phone only—without some other screen sharing, without some other way of understanding what’s going on, or tons of email and not a lot of even voice communication—you really are reducing the bandwidth, and you are going to reduce the ability of the team to deal with problems and to deal with the issues that inevitably come up, because that’s when you need people to have your back. This is a principle that in my mind—you can get support for it, I think, better than maybe some of the other principles—but you’ve got to ask for it.
You’ve got to know that that’s what you need to pay attention to, and you’ve got to pay attention to it.
Mary Ann: Well, and getting the support may require a little bit of investment in infrastructure because not everybody will have those tools available.
To listen to the complete podcast, please click here.
Seventh Agile Principle: Working software is the primary measure of progress. Below is an excerpt from our podcast:
Suzanne: There is a huge amount of time where you are working on design documents, working on requirements refinement, working on interface descriptions, working on essentially everything but the software itself. So, there is this mindset that working software essentially takes precedence over some of the other artifacts that we are accustomed to. This is a very big shift for our DoD audience.
Mary Ann: It is a big shift. The other thing that this makes me think of when you start talking about measuring: most of the very large systems by mandate are required to do earned value management. It is a fairly rigid system where everything’s defined and you don’t change things. But, if you’re going to do it on working software…
Suzanne: …and, if you’re allowed to change requirements at the lower level.
Mary Ann: …and, you’re allowed to change things, then how does that go? That has all kinds of implications on how you do that.
To listen to the complete podcast, please click here.
Eighth Agile Principle: Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely. Below is an excerpt from our podcast:
Mary Ann: There is one program we are aware of—they are into mostly what we would probably consider more sustainment—the software is already out in the field, but they are enhancing it. They are updating it and fixing bugs and so forth. Every cycle they do, they do it by release. They have 1,200 (what they call) "program points," and their users know they have 1,200 points, and so they know how much each of their wish-list things is worth. There is a lot of horse trading going on behind the scenes, so they get their 1,200 points, but they get what they need. But, they know we only can get 1,200 points—no more, no less—and it is constant.
Suzanne: Part of this is setting up reasonable expectations with the end users and the customer community as well as the sponsor community. If there is a good understanding between the developers and the sponsors of the project as to what actually can be accomplished, then we have got a chance at sustainable development.
To listen to the complete podcast, please click here. (A simple sketch of planning against a fixed point budget like this appears at the end of this post.)
Ninth Agile Principle: Continuous attention to technical excellence and good design enhances agility. Below is an excerpt from our podcast:
Suzanne: We talk about Agile allowing good teams to really have superior performance. That is one of the basic aspects of Agile; there is an assumption that the people on the team are competent at what they do. So, you have got to have that competency for coding. You have got to have some competency for design, for detailed design. You have got to have some competency in unit testing. You have got to have some competency in integration. So, there are assumptions about what kinds of things your cross-functional team is capable of, and those are things that if you have them, it will enhance what you are able to do. If you don’t have them, you are going to build technical debt.
You are going to build in defects…
Mary Ann: It gets into the question that a lot of people say, Well, a good Agile team has to be very highly skilled. Bring your A game, if you will, your A players. An average kind of guy really won’t play well in an Agile team. Well, that’s not true. People will rise to what you expect them to do.
To listen to the complete podcast, please click here.
Tenth Agile Principle: Simplicity--the art of maximizing the amount of work not done--is essential. Below is an excerpt from our podcast:
Mary Ann: It means maximizing your return on investment, getting the value you need, and then determining if some of the bells and whistles aren’t needed.
Suzanne: We are also talking about the difference between creating complex architectures and complex ways of solving a problem and looking at, What is the simplest way? Often, when we say, What is the simplest way to do something? we actually stop having to do a lot of the work that goes along with implementing the complex way. So, the simplicity is not just about looking at getting value, it is also about reducing complexity. We are very good as engineers at figuring out lots of convoluted ways to make things work. So, this is really saying, Don’t go there if you don’t need to. I go back to the old Einstein quote, Make everything as simple as possible; but no simpler.
To listen to the complete podcast, please click here.
Eleventh Agile Principle: The best architectures, requirements, and designs emerge from self-organizing teams. Below is an excerpt from our podcast:
Mary Ann: When you hear the principle best architectures, requirements and designs emerge from self-organizing teams, any self-respecting DoD manager would run screaming from the room. What do you mean emerge from a self-organizing team? Oh my gosh! You have to understand what those terms mean. That is the key to understanding what this means. It doesn’t mean chaos, and it doesn’t mean people are just saying, Go do something. Heavens no. Self-organizing means you give the team boundaries. And, say, OK. Here is the problem. You give them an initial skeleton, if you will, of an architecture. You don’t just say, Go make one up. It is stuff that people would call maybe sprint zero, depending on who you talk to. Then, you let them go solve the problem.
Suzanne: Let’s talk a little bit about the architecture thing because that, in the larger Agile community, has been a topic of debate for some years. Although I think we are seeing some resolution of that where most Agile teams in commercial settings are starting to acknowledge the role of architecture, not just emergent architecture but actually some design up front. They call it just enough design up front as opposed to big design up front.
To listen to the complete podcast, please click here.
Twelfth Agile Principle: At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. Below is an excerpt from our podcast:
Suzanne: One of the things you often have when you get into DoD settings is to have multiple teams running. So, one of the things that teams in DoD settings have to be aware of is that improvements local to their own work may affect others. Usually at release time is when you will do larger retrospectives where you look across the teams that are working on a release and get everybody together and say, What do we need to change as a whole group, not just as an individual team?
Mary Ann: That is true.
Release time is when they usually do that, but you might want to consider doing it at the end of iterations or sprints. Because, for instance, say you have three or four different teams. And, team A and team B worked really well together, and they had some kind of cool technology they were using. But, team A and C didn’t have that, but they were doing similar kinds of interfaces. They might want to identify that and say, Why don’t we use this for our interface or our tool too? That way you are not waiting until the very end of the release to upgrade the whole group as opposed to just your team. It gets a little more complicated, but it’s like Scrum of Scrums.
Suzanne: This is one of the roles of a Scrum Master: to help identify these places where, as the different Scrum Teams come up with things, the Scrum Masters get together and help identify where these opportunities are. That is one of the important things about this: tuning, this idea of tuning our work, is about taking advantage of the learning in the moment. It is really about that plan-do-check-act cycle over and over and over again and getting people accustomed to looking at their work from that viewpoint.
To listen to the complete podcast, please click here.
We welcome your feedback on this series as well as ideas for future topics.
Additional Resources
This series of podcasts exploring the application of the 12 Agile principles across the Department of Defense is available in its entirety at sei.cmu.edu/podcasts/agile-in-the-dod.
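The fixed "program points" budget described under the eighth principle is, at bottom, simple arithmetic. The sketch below, in Python, uses hypothetical backlog items and a deliberately naive priority-order selection to show the mechanics: take the highest-priority wish-list items until the 1,200-point capacity is spent, and let everything else wait for the next cycle.

# Minimal sketch of planning a release against a fixed capacity of
# "program points", as in the eighth-principle excerpt above.
# The backlog items and priority scheme are hypothetical.

CAPACITY = 1_200  # constant per release cycle: no more, no less

backlog = [
    # (item name, points, user priority: lower number = more important)
    ("fix pay-calculation defect", 400, 1),
    ("new reporting screen", 650, 2),
    ("refresh map overlays", 500, 3),
    ("minor UI polish", 150, 4),
]

def plan_release(items, capacity):
    """Take items in priority order until the point budget is exhausted;
    whatever does not fit waits for the horse trading before the next cycle."""
    selected, remaining = [], capacity
    for name, points, _prio in sorted(items, key=lambda i: i[2]):
        if points <= remaining:
            selected.append((name, points))
            remaining -= points
    return selected, remaining

chosen, left_over = plan_release(backlog, CAPACITY)
for name, points in chosen:
    print(f"{points:>4} pts  {name}")
print(f"{left_over} pts unused this cycle")

Real programs negotiate the priorities rather than compute them, but the constraint itself is exactly this: a constant budget every cycle, with the trade-offs made visible to users and sponsors.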
SEI . Blog . Jul 27, 2015 01:15pm
Simon Hurst, User Researcher
My name is Simon and I’m the user researcher on the Personal Independence Payment (PIP) Digital project. It’s my job to make sure the team is fully aware and appreciative of who our users are. As Leisa Reichelt wrote recently: "You are not your user and you cannot think like a user unless you're meeting users regularly"
Our users aren’t ‘just’ statistics either:
- They aren’t ‘just’ one of several hundred thousand PIP claimants
- They aren’t ‘just’ one of the 11 million people in the UK with a long term health condition or disability
- They aren’t ‘just’ part of the statistic that 21% of children in families with at least one disabled member are in poverty
Meeting user needs during user testing
When we’re designing services that meet the needs of DWP’s customers, we meet our users regularly. For PIP, we take extra steps to give our users the right level of support when we invite them to test our services. Our users may need to bring a carer or family member; they might need to take breaks during a user testing session; our users might need to test the service with assistive software or on their own devices; and they may prefer us to work with charities who are already supporting them, or for us to visit them in their own home - we’re able to meet all these needs.
Understanding our users’ lives
Our users are real people, with real lives, families, friends and goals. They are John. I met John and his daughter recently at a user research session. John gave me permission to tell his story, and when he saw this blog he said "I’m amazed at how well you’ve captured my story". John is 59 and a father and grandfather. In his youth, John was a goalkeeper at a football league club. In one game, John saved two penalties for the youth team. The opposition striker wasn’t best pleased and kicked him in the back, bursting two of his vertebrae. This ended his chance to be a professional footballer, and also made it difficult to pursue his second career choice, as a joiner, so he set up a successful business. The injury resulted in the degeneration of his nerves, and he knew it would deteriorate throughout his life. The rheumatoid arthritis he developed in his teens didn’t help. In the last few years this has prevented John from working; he had to sell the business he spent over 20 years building and that he loved being a part of. Selling his business meant he lost the lifestyle he took for granted - John had to get rid of his car, downsize his house and stop taking the holidays that meant so much to him and his wife. This affected his mental health so much that he became irritable and aggressive; he’d have "argued with a brick wall". After his heart attack John realised he needed to get help and was referred to a psychiatrist, who helped a great deal. John was too proud to apply for Disability Living Allowance (DLA). He was having difficulty coming to terms with the fact that he could no longer do a job he loved, and applying for DLA would be like "admitting and accepting his condition". His wife and his daughter had to apply for him. He’s now concerned that because he loses the feeling in his hands he can’t be sure he isn’t gripping his grandson’s hand too tightly or not tightly enough. John worries that he’ll hurt his grandson, or not hold him tightly enough when they’re walking near the road. To quote Leisa again: "needs can be functional things people need to do, for example, to check eligibility.
Needs can also be emotional, perhaps people are stressed and anxious and they need reassurance."
Designing services to meet users’ needs
We can use John’s story to design a better service. We can find out what he needs to do and then try our best to get out of his way. We can make sure our designers and developers are building a service that John can use. We can work to ensure the language and tone support and reassure John, and that the questions we are asking him are clear and transparent. We know from research that when people are completing an application for PIP they can get tired, or they need to take a break to take some medicine, or they lose concentration. People also want to ‘sleep on it’: they have a first go at their answers, then rework them over several days. And they really want to keep a copy of what they’ve sent us. Knowing this, we can design a service that allows people to save their application and return to it later (a simple sketch of this idea appears at the end of this post). We’ll continue to meet our users regularly as we design the PIP digital service. We share our findings with people involved in delivering the service too, to help them understand John’s life and how we need to design a service that helps John and all users to get to their goal quickly and easily.
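As an illustration of the save-and-return behaviour the research points to, here is a minimal sketch in Python. The reference numbers, field names, and file-based storage are hypothetical placeholders, not the actual PIP digital service, which would need secure storage, authentication, and an expiry policy.

# Minimal sketch of save-and-return-later for a long application form.
# The storage backend, field names, and reference format are hypothetical.
import json
from datetime import datetime, timezone
from pathlib import Path

DRAFTS_DIR = Path("drafts")  # stand-in for a real, secured data store

def save_draft(reference: str, answers: dict) -> None:
    """Persist partial answers so the user can take a break, sleep on it,
    and rework their answers over several days."""
    DRAFTS_DIR.mkdir(exist_ok=True)
    payload = {
        "saved_at": datetime.now(timezone.utc).isoformat(),
        "answers": answers,
    }
    (DRAFTS_DIR / f"{reference}.json").write_text(json.dumps(payload, indent=2))

def load_draft(reference: str) -> dict:
    """Return previously saved answers, or an empty dict for a new claim."""
    path = DRAFTS_DIR / f"{reference}.json"
    if not path.exists():
        return {}
    return json.loads(path.read_text())["answers"]

# Example: save after a section, resume later from where the user left off.
save_draft("ABC123", {"about_you": {"name": "John"}, "mobility": None})
resumed = load_draft("ABC123")
print(resumed["about_you"]["name"])  # prints: John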
DWP Digital . Blog . Jul 27, 2015 01:15pm
By Tim Palko, Senior Member of the Technical Staff, CERT Cyber Security Solutions Directorate
This post is the latest installment in a series aimed at helping organizations adopt DevOps. Some say that DevOps is a method; others say it is a movement, a philosophy, or even a strategy. There are many ways to define DevOps, but everybody agrees on its basic goal: to bring together development and operations to reduce risk, liability, and time-to-market, while increasing operational awareness. Long before DevOps was a word, though, its growth could be tracked in the automation tooling, culture shifts, and iterative development models (such as Agile) that have been emerging since the early 1970s. While its community-driven evolution has given DevOps strength by infusing it with ideas from many corners of the software development world, it has also hindered the movement by not providing the community with a central set of operational guidelines. Often, a company attempting to adopt DevOps will be doing so against the current of operational red tape and a culture of silos. This transition is not easy for companies that have built their enterprise (and their employees’ expectations) on a foundation of "un-DevOps." Moreover, once the decision has been made and a group has the freedom to attempt implementation (which is often its own challenge), the group is faced with the problem of how to implement it properly. As we’ll discuss below, DevOps adoption is not a one-step process, and it can certainly be done incorrectly (or not at all). An attempt at correctness can be found in the scientific method, with the ability to measure, test, analyze, and repeat DevOps decisions and outcomes. While many leaders in DevOps talk about what needs to be done, there have not been enough eyes and ears tasked with objectively and measurably observing change as a result of implementing DevOps. This is not to say that DevOps does not prescribe monitoring and measuring. In fact, monitoring and measuring is a primary objective in some DevOps circles. The purpose of this monitoring, however, is to compare the state of a project now to that same project last week (or, in another sense, to alert the team that the servers are down). This perspective is great when you need to see how well a project is progressing, but it fails miserably when you need to answer the question "How far along are we on the road to DevOps implementation?" Studies of DevOps adoption rates use the phrases "have adopted" or "will adopt," as though they are line items on an organization’s quarterly goals and objectives. Does that mean they have achieved Flickr’s 10 deployments a day, or do they use the word adopt in a softer connotation, where they have simply accepted their fate and will now begin listening to DevOps philosophy? Given the many definitions DevOps carries, the word adopt has at least that many variations in meaning and probably more. In any case, DevOps is not a one or a zero, but a continuum of positive and negative attributes, and far from linear. I’m not going to craft arbitrary milestones. In some teams, achieving any level of DevOps behavior is an accomplishment worthy of a catered lunch. But to understand that DevOps is at once culture and technology goes a long way toward framing the goal. Another perspective is that your goal of DevOps adoption is what you need it to be.
In other words, each organization has its own signature of pain points and struggles, and the vast array of solutions that DevOps offers is sure to provide a good start toward fixing them, even if just one or two are needed. It seems as though the DevOps movement is doing just fine without some dry, boiled-down set of standards and metrics. However, if we focus on making changes without measuring them, we risk being on an endless road to gold plating our process. This outcome would be fine, except that customers are also investing real money into these cultural overhauls, whether they know it, want to, or neither. Changes must be planned, with a clear goal and a target date. Because DevOps doesn’t come with an inclusive guidebook, identifying concrete goals and reasonable timelines can be hard. Seeing a report of, say, a 400 percent decrease in release time or an 8,000 percent increase in profits can tempt organizational leaders to chase similar results. In reality, any positive result achieved from focusing on some aspect of DevOps will be proportional to the size or output of a business. While these kinds of measurements are quantifiable and objective, are they targeting the specific problems within an organization? If the current process isn’t noticeably damaging release time or profits, what is? Culture-related issues can be hard to identify, let alone quantify and measure. In many cases, providing channels for a team to report incidents in a genial manner can help to identify distinct properties of those incidents, such as severity (rating on a scale), which groups are involved, or the point during the development cycle at which each incident occurred. By identifying concrete metrics for a problem in this fashion, changes become observable over time (a simple sketch of such an incident record appears at the end of this post). Starting with the problem and designing a system to measure its change can be a far more effective strategy than jumping in headfirst to implement DevOps. A set of standards and metrics might even already exist in some sense, but the casual conference-goer might be led to think they don’t, due to how DevOps is often presented: as a patchwork of stories of individual experiences, do-and-don’t lists, and vendors hawking automation technology. Developers new to the idea go home refreshed, approaching the task with enthusiasm, but without the clipboard and analytical squint. This approach can be dangerous for businesses that take a real risk in initiating a culture shift and then find themselves without a quantifiable goal. It is important to be aware that we are missing the dense and tabular chart that would define specific and measurable attributes for degrees of DevOps adoption. Simply knowing that we should have reachable goals is not only logical, but also helpful in guiding change as it occurs within a software development and release team. Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series as well as suggestions for future content. Please leave feedback in the comments section below.
Additional Resources
To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits, please click here. To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here. To listen to the podcast DevOps—Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here.
To read all of the blog posts in our DevOps series, please click here.
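To make the incident-metrics idea above concrete, here is a minimal sketch in Python of the kind of record and quarter-over-quarter summary the post describes. The field names, severity scale, and lifecycle phases are illustrative assumptions, not a prescribed DevOps measurement standard.

# Minimal sketch of recording incidents with concrete, comparable attributes
# (severity, groups involved, lifecycle phase) so change can be observed
# over time. All field names and categories are illustrative.
from collections import Counter
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class Incident:
    reported: date
    severity: int   # 1 (minor) .. 5 (severe)
    groups: tuple   # teams involved, e.g. ("dev", "ops")
    phase: str      # "build", "test", "deploy", "operate"

def quarterly_summary(incidents, year, quarter):
    """Aggregate one quarter's incidents into numbers that can be compared
    quarter over quarter, which is where adoption progress becomes visible."""
    months = range(3 * (quarter - 1) + 1, 3 * quarter + 1)
    sample = [i for i in incidents
              if i.reported.year == year and i.reported.month in months]
    if not sample:
        return {"count": 0}
    return {
        "count": len(sample),
        "mean_severity": round(mean(i.severity for i in sample), 2),
        "by_phase": dict(Counter(i.phase for i in sample)),
    }

# Hypothetical incident log for one quarter.
log = [
    Incident(date(2015, 1, 12), 4, ("dev", "ops"), "deploy"),
    Incident(date(2015, 2, 3), 2, ("ops",), "operate"),
    Incident(date(2015, 3, 20), 3, ("dev",), "test"),
]
print(quarterly_summary(log, 2015, 1))

Comparing these summaries across quarters is one way to answer "how far along are we?" with something other than anecdote.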
SEI . Blog . Jul 27, 2015 01:14pm