At the recent Civil Service Live events we had lots of people coming to our stand wanting to find out more about the DWP Digital Academy and how we are building digital capability within DWP.
In the spirit of "showing the thing" we have made a short film to give an insight into what’s happening at our Digital Academy and how we are building our in-house digital capability. If you are unable to view the film you can email me to request a transcript.
You can follow us on Twitter @DigitalDWP.
DWP Digital Blog - Jul 27, 2015 01:55pm
By Andrew P. Moore, Lead Researcher, CERT Insider Threat Team
Insider threat is the threat to an organization's critical assets posed by trusted individuals - including employees, contractors, and business partners - who are authorized to use the organization's information technology systems. Insider threat programs within an organization help to manage these risks through specific prevention, detection, and response practices and technologies. Proposed changes to the National Industrial Security Program Operating Manual (NISPOM), which provides baseline standards for the protection of classified information, would require contractors that process or access classified information for federal agencies to establish insider threat programs. The proposed changes to the NISPOM were preceded by Executive Order 13587, Structural Reforms to Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information. Signed by President Obama in October 2011, Executive Order 13587 requires federal agencies that operate or access classified computer networks to implement insider threat detection and prevention programs.
Since the passage of Executive Order 13587, the following key resources have been developed:
The National Insider Threat Task Force developed minimum standards for implementing insider threat programs. These standards include a set of questions to help organizations conduct insider threat self-assessments.
The Intelligence and National Security Alliance conducted research to determine the capabilities of existing insider threat programs.
The Intelligence Community Analyst-Private Sector Partnership Program developed a roadmap for insider threat programs.
CERT’s insider threat program training and certificate programs are based on the above resources as well as CERT’s own Insider Threat Workshop, common sense guidelines for mitigating insider threats, and in-depth experience and insights from helping organizations establish computer security incident response teams. As described in this blog post, researchers from the Insider Threat Center at the Carnegie Mellon University Software Engineering Institute are also developing an approach based on organizational patterns to help agencies and contractors systematically improve the capability of insider threat programs to protect against and mitigate attacks.
A Pattern-based Approach to Insider Threat
This post is the latest installment in an ongoing series describing our research to create and validate an insider threat mitigation pattern language that helps organizations prevent, detect, and respond to insider threats. As described in a previous post, our research is based upon our database of more than 700 insider threat cases and interviews with the United States Secret Service, victims' organizations, and convicted felons. From that database, we identified 26 patterns that capture reusable solutions to recurring problems associated with insider threat. Insider threat mitigation patterns are organizational patterns that involve the full scope of enterprise architecture concerns, including people, processes, technology, and facilities. This broad scope is necessary because insiders often have authorized access - both online and physical - to organizational systems. Our approach acknowledges inter-relationships among organizational structures, such as policies, training, and employee agreements, and draws upon those inter-relationships to describe the patterns themselves.
The following is a high-level outline of a pattern for disabling access after an insider leaves an organization for other employment, an older version of which was published at the 2013 PLoP workshop:
Title: Eliminate Methods of Access after Departure
Intent: To avoid insider theft of information or sabotage of information technology after departure
Context: An insider is departing an organization for employment elsewhere, and you have a comprehensive record of access paths the insider has for accessing the organization's systems
Problem: Insiders who depart an organization under problematic circumstances may become angry to the point of wanting to steal information from the organization or compromise the integrity of the organization's information or information systems. Active access paths into the organization's systems after departure provide the opportunity to do that.
Solution: Disable accounts that you know about upon departure, and prepare to monitor suspicious remote access after departure for signs of unauthorized access attempts
Related Patterns: Monitor Activity after Departure
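The solution step of this pattern is organizational, but its core bookkeeping can be sketched in code. The C fragment below is purely illustrative - the access_path registry and the disable_all_access routine are our own hypothetical names, not part of the pattern language - and it assumes the comprehensive record of access paths that the pattern's context calls for:

```c
#include <stdbool.h>

/* Hypothetical record of one access path an insider holds; the pattern's
 * context assumes a comprehensive inventory of these exists at departure. */
struct access_path {
    const char *name;   /* e.g., "VPN account", "badge", "admin login" */
    bool enabled;
};

/* "Eliminate Methods of Access after Departure": disable every known path
 * and report how many were still active, so that any access observed
 * afterward can be treated as suspicious. */
static int disable_all_access(struct access_path *paths, int n) {
    int disabled = 0;
    for (int i = 0; i < n; i++) {
        if (paths[i].enabled) {
            paths[i].enabled = false;
            disabled++;
        }
    }
    return disabled;
}
```

Any access path missing from the inventory is exactly the gap that the related Monitor Activity after Departure pattern exists to catch.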
For organizations and agencies establishing insider threat programs, our approach specifies
what processes are important (stressing the need for consistent enforcement)
what policies are important
how those processes and policies are implemented both by humans and technology
what technology is needed to support all of that
There will undoubtedly be great variation in insider threat programs, depending on the risks faced by individual organizations. We therefore use capability development scenarios to designate paths through the mitigation pattern language with the goal of mitigating a specific insider threat behavior. The mitigation pattern outlined above will be used in a capability development scenario described below. Such capability development scenarios serve to guide insider threat program designers as they try to ensure their programs are resilient against insider threats to their critical assets.
An Example Capability Development Scenario
In a forthcoming report on this topic, we will outline several capability development scenarios (CDSs). One scenario involves mitigating theft of intellectual property when an employee resigns or is fired from the organization:
Through our analysis of our insider threat database, we observed that 70 percent of insiders who stole intellectual property from an employer did so within 60 days of their termination from the organization. This CDS urges that both parties agree at hiring on the ownership of intellectual property, as well as on the consequences if the agreement is breached. Upon termination, whether voluntary or forced, the organization should disable the insider's access. During the exit interview, the organization must review existing agreements regarding IP.
The CDS advocates that an employer monitor insider actions for 60 days prior to termination and for 60 days after termination. Suspicious behaviors, including uncharacteristically large downloads of intellectual property, should be handled by the human resources department, the legal department, or both.
As specified by the associated path through the mitigation pattern language, this CDS advocates that organizations
Screen Employees
Agree on IP Ownership
Periodically Raise Security Awareness
Log Employee Actions
Increase Monitoring Due to an Employee’s Pending Departure
Reconfirm Employee Agreements on Departure
Eliminate Methods of Access after Departure
Monitor Activity after Departure
In summary, mitigating theft of IP at departure involves ensuring that the organization increases its monitoring of any insider with access to critical assets, watching for specific suspicious behaviors when the insider resigns or is terminated. In addition, insiders must agree to, and be reminded of, the fact that they cannot take organization-owned IP with them.
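The 60-day monitoring rule in this scenario can be expressed as a simple predicate. The sketch below is illustrative only - the event structure, the baseline, and the 10x threshold are our own assumptions, since the CDS does not prescribe how "uncharacteristically large" is quantified:

```c
#include <stdbool.h>
#include <stdlib.h>

#define WINDOW_DAYS 60  /* monitor 60 days before and after termination */

/* Hypothetical download event, timestamped relative to the termination
 * date (negative values are before termination, positive values after). */
struct download_event {
    int days_from_termination;
    long bytes;
};

/* Flag an event for HR/legal review if it falls inside the monitoring
 * window and is far larger than the insider's normal download volume.
 * The 10x multiplier is an illustrative stand-in for a real baseline
 * model of "uncharacteristically large". */
static bool should_flag(const struct download_event *e, long baseline_bytes) {
    if (abs(e->days_from_termination) > WINDOW_DAYS)
        return false;
    return e->bytes > 10 * baseline_bytes;
}
```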
Future Work in Insider Threat
Continuing our efforts to help federal agencies and contractors develop insider threat programs, per Executive Order 13587, we are now seeking active government partners to apply and refine our approach. We are also continuing our research into fundamental patterns of insider threat mitigation to make sure that they remain well grounded and scientifically validated.
Looking ahead, we plan next to investigate insider social networks and the role they play in contributing to insider threat. In particular, we plan to examine how those social networks change over time to determine whether we can distinguish the social networks of malicious and non-malicious insiders. As part of this research, we are collaborating with Dr. Kathleen Carley, a professor at Carnegie Mellon University’s Institute for Software Research in the School of Computer Science.
We welcome your feedback on our work in the comments section below.
Additional Resources
To read about the insider threat mitigation patterns published at PLoP, please visit http://www.hillside.net/plop/2013/papers/Group4/plop13_preprint_47.pdf.
To read the PLoP Conference paper, Building a Multidimensional Pattern Language for Insider Threats, please visit: http://www.hillside.net/plop/2012/papers/Group%202%20-%20Rattlesnake/Building%20a%20Multidimensional%20Pattern%20Language%20for%20Insider%20Threats.pdf.
To read the SEI technical report, Justification of a Pattern for Detecting Intellectual Property Theft by Departing Insiders, please visit http://www.sei.cmu.edu/reports/13tn013.pdf.
For more information about the CERT Insider Threat Center, please visit http://www.cert.org/insider-threat/.
SEI Blog - Jul 27, 2015 01:55pm
Ben Holliday - User Researcher
Back in December, Amy wrote on the GDS design notes blog about start pages within guides:
Lots of users navigate to start pages for services through content on GOV.UK, which means start pages repeat information they may already have read
Since then we’ve been working on improving the Carer’s Allowance content on GOV.UK - testing different design iterations in user research sessions.
What we discovered in research
In our user research, we discovered 2 clear user needs for the Carer’s Allowance guide on GOV.UK. Users want to know:
what they’re entitled to, and/or
where and how to apply
We found that the amount of content about Carer's Allowance on GOV.UK can be overwhelming, so many people just want to start an application.
We also found that most people have common questions about whether they, their partner, or the person they care for, will be better or worse off if they get Carer’s Allowance.
Some users need detailed information, but most want a service that "just tells me what I need to know". This is why people often prefer to speak to someone - they can get the information they need without having to read through everything.
Getting users to what they want quicker
We’ve now implemented the design for a single ‘make a claim’ page in the guide.
There’s no longer a separate start page so users can navigate more clearly to ‘Apply now’. It means they can quickly find out if they are entitled to Carer’s Allowance by answering the eligibility questions in the application and access a helpline if they still need to speak to someone.
The old Carer's Allowance guide was digital by preference rather than digital by default. The new 'make a claim' page puts more emphasis on the digital service because it's simpler, clearer and faster to apply online.
Improving content
After testing different approaches for 'you will need', we found that we should just tell people about anything that could block their progress. For example, you need your National Insurance number or you can't complete the transaction. In contrast, we previously said 'details of benefits received', but you only need to know that benefits are being received - you don't need to enter the details.
We also found that eligibility needs to be signposted, but it doesn't need to be part of 'make a claim'. If people just want to start an application, they answer the eligibility questions as part of it.
What’s happened
These changes have now been live on GOV.UK since May. We’ve seen a significant increase in traffic to the service and a 22% increase in applications made online. More people than ever before are using the online service.
Next steps
We’re still testing and learning about the content in ‘make a claim’. Most of our research is now with less confident users who need reassurance that completing an application online is quick, easy and secure. We’re thinking about:
Telling people that they don't need a printer - staff who speak to customers in the Carer's Allowance Unit regularly hear that people expect to have to print to use the online service
Telling people how long it will take them to complete an application online (using live data)
Showing users the user satisfaction score for the Carer’s Allowance online claim service
DWP Digital Blog - Jul 27, 2015 01:54pm
By Julien Delange, Member of the Technical Staff, Software Solutions Division
Given that up to 70 percent of system errors are introduced during the design phase, stakeholders need a modeling language that will ensure both requirements enforcement during the development process and the correct implementation of these requirements. Previous work demonstrates that using the Architecture Analysis & Design Language (AADL) early in the development process not only helps detect design errors before implementation, but also supports implementation efforts and produces high-quality code. Our latest blog posts and a recent webinar have shown how AADL can identify potential design errors and avoid propagating them through the development process. Verified specifications, however, are still implemented manually. This manual process is labor intensive and error prone, and it introduces errors that might break previously verified assumptions and requirements. For these reasons, code production should be automated to preserve system specifications throughout the development process. This blog post summarizes different perspectives on research related to code generation from architecture models.
An Approach to Improve Safety-Critical Process Development
At the ERTSS 2014 conference in February 2014, Jerome Hugues from Institut Supérieur de l’Aéronautique et de l’Espace (ISAE) and Matteo Bordin from AdaCore presented an approach that completely generates a system implementation from models. Their approach relies on various notations—including AADL and Simulink—to capture the different system aspects, such as architecture and behavior. This work is the result of a collaboration between a university (ISAE) and a company (AdaCore) that are both experts in the design and implementation of safety-critical systems. After the conference, I had the opportunity to discuss the project with Cyrille Comar, the cofounder of AdaCore, and Matteo Bordin, a model-based expert at AdaCore, and learn more about this work.
Their approach integrates two different modeling notations to capture system concerns:
The architecture is specified using AADL. This architecture defines the execution environment, software deployment, and configuration, including the number of tasks, their allocation to processors, the binding of a connection to a specific bus to transport data, and other specifications. Some people refer to this view as the so-called nonfunctional architecture (how the system provides its functions).
The behavior is specified using Simulink, which characterizes how the system processes and uses the data from its environment, for example, to compute new data or activate a device. Some refer to this view as the functional architecture (what functions the system provides).
These two notations are fully integrated: the architecture is the execution support for the behavior. To integrate behavior models, the architecture (the AADL model) contains components (subprograms) that reference the behavior and allocate the functional components into the execution platform.
Figure 1 - Code Generation Process from AADL
The ISAE-AdaCore team developed tools—including Ocarina and Project P—that generate and integrate code from AADL and Simulink. Once the tools have generated the code, compilers can link the code together without modification, creating the complete implementation and avoiding any errors related to manual code development. Such an approach removes an important error contributor (i.e., human developers) and is a major step toward improving safety-critical process development.
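To make the generation step concrete, the sketch below shows the general shape of such glue code in C: a task wrapper of the kind a generator like Ocarina emits from the AADL model, dispatching a behavioral step function of the kind generated from the Simulink model. All names and the trivial control law are our own illustrations, not actual tool output:

```c
/* Illustrative stand-in for a Simulink-generated step function:
 * one execution of the behavioral (functional) model. */
static double controller_step(double sensor_input) {
    return sensor_input * 0.5;  /* placeholder control law */
}

/* Illustrative stand-in for AADL-generated task code: the architecture
 * model fixes the task and its dispatching, and the generated wrapper
 * calls the behavioral step once per dispatch. Here the dispatch loop
 * is driven by an array of samples instead of a real-time clock. */
static double run_periodic_task(const double *samples, int n) {
    double last_output = 0.0;
    for (int i = 0; i < n; i++) {
        last_output = controller_step(samples[i]);
    }
    return last_output;
}
```

The point of the approach is that both sides are generated from verified models and merely linked together, so no hand-written code sits between architecture and behavior.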
Why Automate Code Production?
This work relies on AADL because it provides the appropriate semantics to generate code, as Bordin, project manager at AdaCore, explained to me after the conference:
AADL has the main advantage of providing a blueprint representation of a runtime system. This allows translating models to source code without ambiguities and ensures that no semantics are lost during code generation. Of course, the availability of Ocarina was a major plus in choosing AADL.
During our discussion, Bordin also reported that the precise semantics of the language avoids the ambiguity of having multiple notations and diagrams for the same concept:
For this specific project, the only other off-the-shelf alternative would have been OMG MARTE. The main issue we have with MARTE is the ambiguity in representing runtime elements—they can be RtUnit, SwSchedulableResources, etc. Each MARTE tool requires one specific modeling pattern to work. Some tools may require composite structure diagrams, others may require activity diagrams, and others may require sequence diagrams. This means that it is difficult for application developers to easily adapt their modeling standard to tools. In addition, in MARTE it would have been necessary to define our own library of model elements to represent our core building blocks (Ada Ravenscar Tasks), while in AADL it is enough to specify precise properties to achieve the same result. Finally, we are not aware of MARTE-to-SPARK code generators, and we believe SPARK is a major element in our approach.
The development process described above relies on the architecture model as the central artifact for almost all activities (design, validation, analysis, and implementation). For that reason, the selected modeling language must have accurate and precise semantics to avoid any ambiguity. AADL fills this need.
The study by Bordin and Hugues also shows the relevance of model-based technology and especially AADL through a use case in a production project. A recent paper presented at the 35th International Conference on Software Engineering (ICSE) reported that more than half of developers do not have a real interest in modeling technology, mostly because of concerns with model consistency. For example, one architecture aspect (waiting time on a shared resource) can be represented using different patterns, each one having different and inconsistent characteristics (the timeout to wait for the resource). By using a single notation for driving different aspects of system development (such as validation and implementation), AADL addresses this issue and ensures system consistency (e.g., having the same timeout value) across the development process.
Toward an Optimized System Implementation
The research described in this post is a great step forward in the adoption of model-based tools in operational projects and shows the readiness of model-based methods to automate the implementation of safety-critical systems. By using a full model-based approach from system design to implementation, these tools ensure that validated requirements are preserved throughout the development process and then are correctly implemented.
In addition, system stakeholders can leverage model analysis techniques not only to detect potential architecture defects earlier in the design process but also to optimize the architecture and produce simpler, lighter, and more efficient implementations. In particular, adding a new step in the implementation process, as shown in the figure below, to optimize the architecture by removing useless components or refactoring their interactions would simplify the system implementation and ease its certification. As shown below, the initial model would be processed and automatically optimized by the code generator, creating a more efficient system implementation. For example, if two interdependent tasks are executed on the same processor, one potential optimization would consist of relocating them into one process, removing overhead resources and other related context-switch times during execution. Of course, optimizations would be relevant based on stakeholders’ requirements. For example, relocating both tasks within the same process might improve system performance but might be an issue from a security perspective if they contain data at different security levels.
Figure 2 - Optimized Architecture Generation
By using a high-level notation that focuses on a system's key quality attributes (e.g., performance, safety, etc.), appropriate tools can analyze the system architecture and optimize it. To achieve such optimization, accurate semantics - such as those of AADL, with its specialized component types and properties - are a must. AADL provides the appropriate level of abstraction to simplify the system and, eventually, its implementation.
Conclusion
The research described above demonstrates that system implementation can be automatically generated from models. Although code generation techniques from models are not new (generation of code skeletons in Java from UML models has existed for several years), this new ISAE-AdaCore project automatically produces a complete system implementation, avoiding errors related to manual code production and potentially improving the certification process.
Model-driven engineering techniques not only help to implement a system but also automatically improve it. By analyzing the architecture, tools can optimize and simplify the system, making the resulting implementation lighter and easier to analyze and certify. Ongoing SEI research efforts will focus on such optimization techniques to remove some of the usual system complexity and ease the verification and certification of system implementations.
Additional Resources
For more information about the Architecture Analysis & Design Language (AADL), please visit http://www.aadl.info/aadl/currentsite/.
For more information about the Embedded Real Time Software and Systems Conference (ERTS), please visit http://www.erts2014.org/.
For more information about UML in Practice, please visit http://oro.open.ac.uk/35805/.
To read the paper, System to Software Integrity: A Case Study, please visit http://www.spark-2014.org/entries/detail/case-study-for-system-to-software-integrity-includes-spark-2014.
To view a recent webinar, Architecture Analysis with AADL, please visit https://www.webcaster4.com/Webcast/Page/139/5357.
SEI Blog - Jul 27, 2015 01:54pm
Starting Wednesday, May 13th, through Friday, May 15th, the SHRM Foundation is giving away books you'll enjoy reading next to the pool this summer. To be eligible to win, enter daily by completing the two steps below:
1. Share the title of your favorite HR book
2. Donate $35 or more to the SHRM Foundation online
Each day, the SHRM Foundation will select five winners from those who have submitted the most creative HR book titles! Each day, we're giving...
SHRM Blog - Jul 27, 2015 01:54pm
By Aaron Ballman, Senior Member of the Technical Staff, CERT Secure Coding Initiative
With the rise of multi-core processors, concurrency has become increasingly common. The broader use of concurrency, however, has been accompanied by new challenges for programmers, who struggle to avoid race conditions and other concurrent memory access hazards when writing multi-threaded programs. The problem with concurrency is that many programmers have been trained to think sequentially, so when multiple threads execute concurrently, they struggle to visualize those threads executing in parallel. When two threads attempt to access the same unprotected region of memory concurrently (one reading, one writing), logical inconsistencies can arise in the program, which can yield security concerns that are hard to detect. The ongoing struggle with concurrent threads of execution has introduced a whole class of concurrency-related issues, from race conditions to deadlock. Developers need help writing concurrent code correctly. This post, the second in a series on concurrency analysis, introduces Clang Thread Safety Analysis, a tool that was developed as part of a collaboration between Google and the Secure Coding Initiative in the SEI's CERT Division. Clang Thread Safety Analysis uses annotations to declare and enforce thread safety policies in C and C++ programs.
Foundations of Our Work
Many programmers today take a lock-based approach to dealing with concurrency issues. The canonical lock-based approach involves locking a piece of memory to ensure that only one thread at a time can access a given region of memory. When that piece of memory no longer requires protection, it is unlocked. Attempts to access that memory by threads not holding the lock result in those threads blocking until the lock is released. There are certain classes of problems, however, where a lock-based approach does not make sense, including real-time systems and interactions between the graphical user interface (GUI) and synchronous resources, such as a database or the network.
Real-time systems typically avoid locks because, with locked resources, the potential exists for threads to block while waiting for a lock to be released. If a critical thread is blocked (such as one controlling the thrusters of a jet), the resulting behavior of the system could be disastrous. Likewise, it is often desirable to avoid using locks from the GUI thread. If the GUI thread is blocked, the user interface cannot be updated and no new user input can be accepted until the GUI thread is released from its blocking operation.
As detailed in our introductory blog post on this work, which was spearheaded by Dean Sutherland, our approach is predicated on thread usage policy (the subject of Sutherland’s doctoral thesis) to address the locking problem described above. That blog post defined thread usage policy as a group of often unspecified preconditions used to manage access to a shared state by regulating which specific threads are permitted to execute particular code segments or to particular data fields. Put another way, instead of locking regions of memory, a programmer specifies that threads have roles to fulfill. Roles are associated with methods. Specifically, a programmer declares that a particular method should only be called from a thread context that is explicitly holding or not holding a specific role. For example, the main thread in a program is typically used to run the GUI for the program, so a programmer could assign the main thread a "GUI" role. A worker thread could be spawned off to handle database access, and that thread could be assigned a "Database" role. Finally, the programmer can use an annotation that specifies "may only be called when the ‘Database’ role is held." If the programmer wrote code that would attempt to access the database by calling the "Database" annotated function from a GUI function, a diagnostic would be generated alerting the programmer of this constraint violation.
The concept of thread usage policy is not language specific; similar concepts exist in many programming languages including Java, C, C#, C++, Objective-C, and Ada. The initial post on this work described its application to Java. This post will describe my effort with Dean Sutherland, along with collaborator DeLesley Hutchins of Google, to take the thread usage policy initially applied in Java and transfer it to C and C++ using the Clang open source compiler.
Collaborating with Google
As our team of researchers began implementing our approach, we learned that a thread safety analysis based on locks had already been developed by Google and deployed on a large scale within its internal code base. The Google code base makes heavy use of locks, and Google has developed both static analysis tools and dynamic analysis tools, such as ThreadSanitizer, to help find and prevent race conditions.
Working with DeLesley Hutchins, we came to the conclusion that although locks and roles are orthogonal ways of ensuring thread safety, they can both be handled using the same underlying static analysis machinery. The primary difference between the two approaches lies in the terminology that programmers use to annotate their programs. When we began this collaboration, Google had already mandated that all programmers use lock-based analysis on every line of C++ code that is run within Google.
An Overview of Our Analysis Technique
Compilers with static analysis functionality, such as Clang, have helped developers by allowing threading policies to be formally specified and mechanically checked. Clang is a production-quality, open source compiler for the C family of programming languages that builds on the LLVM optimizer and code generator. Clang also provides a sophisticated infrastructure for implementing warnings and static analysis. We selected Clang because it initially parses a C++ input file to an abstract syntax tree (AST), which is an accurate representation of the original source code, down to the location of parentheses. The AST makes it easier to emit quality diagnostics, but complicates analysis in other respects.
As described in our paper, the Clang analysis infrastructure constructs a control flow graph (CFG) for each function in the AST. This transformation is not a lowering step; each statement in the CFG points back to the AST node that created it. We are then able to walk the CFG, building a contextual set of roles currently held or not held, and compare them against assumptions annotated in the source code to diagnose incorrect assumptions. Because we work within a higher-level abstraction layer, the diagnostics we report to the user closely map to the actual source code, but we are still capable of producing diagnostics for compiler-generated code, such as implicitly defined constructors in C++.
You can annotate your source with thread roles for analysis with Clang by using the capability attributes provided by the compiler (the full list can be found in the Clang documentation). You start by defining role types using the capability or shared_capability attributes and pass "role" as the argument. This attribute appertains to a struct or typedef, which can then be used to declare unique roles for use within your source code. For example, if a programmer wanted to declare two thread roles, FlightControl and Logging, for a C program, they would be introduced as:
typedef int __attribute__((capability("role"))) ThreadRole;
ThreadRole FlightControl, Logging;
These two distinct thread roles can then be used to identify those capabilities for use in the other capability attributes. Since thread roles do not define semantic functionality at runtime, the acquisition and forfeiture of a thread role capability is typically defined as a no-op, which does not require additional thread safety analysis checking and is optimized away by the compiler, incurring no runtime overhead:
void acquire(ThreadRole R) __attribute__((acquire_capability(R))) __attribute__((no_thread_safety_analysis)) {}
void release(ThreadRole R) __attribute__((release_capability(R))) __attribute__((no_thread_safety_analysis)) {}
These functions can then be used to acquire or release the given thread role. Once the acquire() function is called, the capability passed in to the function will then be held for the capability context of all subsequent function calls, until the release() function is called with that capability. For instance, the logging thread’s entry point may look like:
void *logging_entrypoint(void *arg) {
  void *ret;
  acquire(Logging);
  ret = logging_entrypoint_impl(arg);
  release(Logging);
  return ret;
}
The thread entry point acquires the Logging role, calls the actual implementation of the logging thread with the Logging capability held, and then releases the Logging role before the thread completes execution. The FlightControl thread entry point would look similar, except it would acquire and release the FlightControl capability instead of the Logging capability.
At this point, it is possible to usefully annotate functions as requiring either the Logging or the FlightControl capability. If these functions are called from a context where the capability set satisfies the requirements written on the function, no diagnostic is produced because the source code is logically consistent with the annotations. For instance, the following definition of the logging_entrypoint_impl() function demonstrates requiring the Logging capability in a well-formed manner:
extern void dispatch_log(const char *msg) __attribute__((requires_capability(Logging)));
extern const char *deque_log_msg(void) __attribute__((requires_capability(Logging)));

void *logging_entrypoint_impl(void *arg) __attribute__((requires_capability(Logging))) {
  const char *msg;
  while ((msg = deque_log_msg())) {
    dispatch_log(msg);
  }
  return 0;
}
However, if a function is called from a context where the capability set does not satisfy its requirements, a diagnostic is produced at compile time (when Clang's -Wthread-safety warnings are enabled). Consider this definition of the flight_control_entrypoint_impl() function:
void *flight_control_entrypoint_impl(void *arg) __attribute__((requires_capability(FlightControl))) {
  dispatch_log("Flight Control Started"); /* Should diagnose an error */
  /* … */
  return 0;
}
In this example, flight_control_entrypoint_impl() requires that the FlightControl capability be held, which it is, because the FlightControl thread's entry point acquires that role before calling the implementation. However, the call to dispatch_log() requires that the Logging capability be held; that capability is never acquired anywhere in this call graph, so a diagnostic is issued.
A Calculus of Capabilities
As described in our recently published paper on this work, C/C++ Thread Safety Analysis, Clang’s thread safety analysis is based on a calculus of capabilities. To read or write to a particular location in memory, a thread must have the capability, or permission, to do so. A capability can be thought of as an unforgeable key or token, which the thread must present to perform the read or write. Capabilities can take one of two forms:
A unique capability cannot be copied, so only one thread can hold it at any one time; uniqueness is enforced by a linear type system.
A shared capability may have multiple copies that are shared among multiple threads.
The analysis enforces a single-writer/multiple-reader discipline. Writing to a guarded location requires a unique capability. Likewise, reading from a guarded location requires either a unique or shared capability. In other words, many threads can read from a location at the same time because they can share the capability, but only one thread can write to it. Moreover, a thread cannot write to a memory location at the same time that another thread is reading from it because a capability cannot be both shared and unique at the same time.
This discipline ensures that programs are free of data races, which occur when multiple threads access the same location in memory at the same time and at least one of the accesses is a write. Since write operations require a unique capability, no other thread can access the memory location during a write.
Wrapping Up
The destructive nature of race conditions leads many organizations, including Google, to use both static and dynamic analysis on multi-threaded programs. The two approaches complement each other: dynamic analysis requires no annotations and thus can be applied more widely, but it can detect race conditions only in the subset of program executions exercised by tests. Static analysis is less flexible but covers all possible program executions, and it reports errors earlier, at compile time.

We encourage readers to try these annotations by downloading the latest version of Clang (3.5). Please send us feedback on your experiences, as well as on the research described above, in the comments section below.
Additional Resources
To download the paper, C/C++ Thread Safety Analysis, please visit http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/42958.pdf.
To read the paper, Composable Thread Coloring (an earlier name for the technique we now call thread role analysis), by Dean Sutherland and Bill Scherlis, please go to www.fluid.cs.cmu.edu:8080/Fluid/fluid-publications/p233-sutherland.pdf.
SEI Blog | Jul 27, 2015 01:53pm
On May 13, @SHRMnextchat chatted with Sharlyn Lauby (@Sharlyn_Lauby) about Too Many Meetings in the workplace. In case you missed this excellent chat with great tips and advice, you can see all of the tweets here: [View the story "#Nextchat RECAP: Too Many Meetings!" on Storify] ...
SHRM Blog | Jul 27, 2015 01:52pm
By Arie Gurfinkel, Senior Member of the Technical Staff, Software Solutions Division
When we verify a software program, we increase our confidence in its trustworthiness. We can be confident that the program will behave as it should and meet the requirements it was designed to fulfill. Verification is an ongoing process because software continuously undergoes change. While software is being created, developers upgrade and patch it, add new features, and fix known bugs. When software is being compiled, it evolves from program language statements to executable code. Even during runtime, software is transformed by just-in-time compilation. Following every such transformation, we need assurance that the change has not altered program behavior in some unintended way and that important correctness and security properties are preserved. The need to re-verify a program after every change presents a major challenge to practitioners—one that is central to our research. This blog post describes solutions that we are exploring to address that challenge and to raise the level of trust that verification provides.
As we strive to ease the burden of effort surrounding verification for practitioners, we attempt to answer this question:
How can we ensure that the amount of verification work is proportional to the size of the change, as opposed to the size of the system?
It is possible for even a small change to a program to trigger a complete re-verification of it.
In our project, we study this problem of evolving verification in the context of a compiler. Two standard tools play a role in the process we describe here. The first is the compiler, which takes the source code of a program entered by the programmer and transforms it into an executable. The second is a verifier that takes a program and a property and determines whether the program satisfies the property. Figure 1 below shows a typical interaction of the compiler and the verifier. Both take the source code of a program in and manipulate it.
While to the user the compiler is a black box that takes source code and produces binaries, internally it is structured as a series of transformations. Each transformation pass makes a small refinement to the program, optimizing it in some way. For example, compilation optimizations might eliminate useless code or enable the program to use fewer resources. This iterative improvement is comparable to what happens to a program as it evolves over time. Thus, compilation provides a well-defined form of software evolution that we have chosen to study in our research.
One of the main hazards to verification results is that the compiler and verifier might use incompatible interpretations (semantics) of the source code. We refer to this hazard as the semantics gap. The semantics gap is a major threat to the validity of formal verification, and mitigating it is one of our research objectives.
Towards our goals of overcoming the semantics gap, expanding the information provided to the practitioner, and automating the verification process, we are developing a certified compiler, which tightly integrates the compilation and verification tool chains. When verification is successful, this compiler provides a certificate—documenting a rigorous formal proof that shows why the software will behave as expected regarding a desired condition. The certificate provides an extra layer of trust to the user.
One major advantage of our approach is that it does not involve building the system from scratch. Instead, we have found a way to combine an existing compiler and verifier. We use the LLVM compiler, which is designed as a set of reusable libraries with well-defined interfaces and is written in C++. Our process also involves an intermediate representation in LLVM bitcode. It is this intermediate representation that allows the compiler and verifier to overcome their semantic differences and "talk to each other." Our verifier is UFO, a framework and tool that we recently developed for verifying (and finding bugs in) sequential C programs. UFO is a new breed of verifier and a major reason we are hopeful for success in our endeavor. As the winner in four categories of the 2013 Software Verification Competition, it currently represents the state of the art in software verification.
Figure 2 below represents the architecture of our proposed certified compiler. Key to the process is the common front end that translates the input program into a formally defined intermediate representation. As the figure shows, that representation is simplified for verification, then proceeds on to be verified. If an error is detected, it is communicated to the user through an error report. Otherwise, a certificate is generated, and the compilation tool-chain takes over.
The compiler first embeds the verification certificate into the program. When the program is compiled, the resulting executable is validated against this embedded certificate. If the validation passes, the certificate is removed, and the executable is returned to the user. The compiler and the verifier work on the same formal intermediate representation, so there is no semantic gap.
Impact
We anticipate that the verifier+compiler will provide significant benefit because it
presents the user with more understandable verification messages
allows for validation of correctness of compiler optimization
allows for producing self-certifiable executable code
Our verifier+compiler architecture will simplify the verification effort considerably, as it automatically propagates verification results across the compilation tool chain. This propagation enables safe use of optimization in safety-critical systems and will have widespread impact. If successful, our toolset will enable many new applications of verification techniques in safety- and mission-critical environments such as defense and commercial aviation domains.
The verifier+compiler will also significantly speed system development and make it more efficient. Moreover, the optimized code better utilizes available resources and so will aid in fulfilling resource requirements. Development cost will decrease because automatic optimization is cheaper than the manual optimization that is currently performed. Verification costs will also decrease; for several reasons, code manually optimized for resource usage is much harder to verify than unoptimized code. For example, the (manual) optimization process often trims the code, which makes it less complete and harder to verify. In addition, many compilers have been designed to generate code that runs quickly, rather than code that can be verified quickly. Automated verification that occurs before automatic optimization resolves these issues.
Looking well into the future, we see possibilities for exciting applications of our research. If we were able to achieve optimal speed and efficiency for the verifier+compiler, we might apply it, for example, to an autonomous system. Such a system uses sensors to determine what is happening in the environment and changes its behavior accordingly. For example, it could rewrite its own program as it runs. With sufficient runtime performance, our verifier+compiler could verify the system as it runs and evolves.
Qualifying Tools
Other new applications of verification techniques in safety- and mission-critical settings could involve the qualification of tools. The DoD and related government agencies require that safety-critical systems meet certain standards, for example, as described in DO-178B, guidance used to determine whether software will perform adequately in a safety-critical airborne environment. When a system satisfactorily meets these standards, the agency declares it certified (not to be confused with software certified by the verifier as described above). Government agency certification also requires that tools used to build safety-critical systems be qualified: tools used to develop or verify safety-critical software must not introduce errors or fail to detect them, respectively. So, to build a certifiable avionics system, for example, you must use tools that you can demonstrate to be qualified. For system analysis tools, the new verifier+compiler architecture would make such qualification easier than it is currently. In particular, closing the semantics gap with the new architecture would have a significant impact both on how development tools are qualified and on how compilers are used in safety-critical domains.
Conclusion
If successful, we believe that our research will have a broad impact and numerous applications, including improving the scalability of automated verification, simplifying software certification, and enabling novel architectures for adaptive and/or autonomous systems that are re-verified on the fly. We also expect that the prototype currently under development will be valuable for demonstrating the technology to stakeholders and finding new applications in safety-critical domains.
We welcome your feedback on our work in the comments section below.
Additional Resources
To read the paper, UFO: Verification with Interpolants and Abstract Interpretation (Competition Contribution), by Aws Albarghouthi, Arie Gurfinkel, Yi Li, Sagar Chaki, and Marsha Chechik, please visit http://link.springer.com/chapter/10.1007%2F978-3-642-36742-7_52.
To read the paper, Synthesizing Safe Bit-Precise Invariants, by Arie Gurfinkel, Anton Belov, and João Marques-Silva, please visit http://anton.belov-mcdowell.com/baker/media/papers/gurfinkel_belov_marques-silva--tacas14.pdf.
To read the paper, FrankenBit: Bit-Precise Verification with Many Bits (Competition Contribution), by Arie Gurfinkel and Anton Belov, please visit http://anton.belov-mcdowell.com/baker/media/papers/gurfnikel_belov--tacas14.pdf.
To read the paper, Incremental Verification of Compiler Optimizations, by Grigory Fedyukovich, Arie Gurfinkel, and Natasha Sharygina, please visit http://verify.inf.usi.ch/sites/default/files/nfm2014.pdf.
To view the presentation, Trust in Formal Methods Toolchains, please visit http://arieg.bitbucket.org/pdf/2013-07-14-VeriSure.pdf.
SEI Blog | Jul 27, 2015 01:52pm
By Derrick H. Karimi, Member of the Technical Staff, Emerging Technology Center
This blog post is co-authored by Eric Werner.
In an era of sequestration and austerity, the federal government is seeking software reuse strategies that will allow it to move away from stove-piped development toward open, reusable architectures. The government is also motivated to explore reusable architectures for purposes beyond fiscal constraints: to leverage existing technology, curtail wasted effort, and increase capabilities rather than reinventing them. An open architecture in a software system adopts open standards that support a modular, loosely coupled, and highly cohesive system structure, including the publication of key interfaces within the system and full design disclosure. One area where the Department of Defense (DoD) is concentrating on the development of service-oriented architectures and common technical frameworks is the intelligence community, specifically the Defense Intelligence Information Enterprise (DI2E). As this blog post details, a team of researchers at the SEI Emerging Technology Center (ETC) and the Secure Coding Initiative in the SEI's CERT Division is working to help the government navigate these challenges in building the DI2E framework, which promotes reuse in building defense intelligence systems.
Foundations of Our Work
Our work focused on development of a framework for DI2E, the non-command-and-control (C&C) part of the Distributed Common Ground System (DCGS) and the Combat Support Agencies (CSAs). The DI2E Framework provides the building blocks for the Defense Intelligence Community to more efficiently, effectively, and securely develop, deliver, and interface their mission architectures. The core building blocks of the DI2E framework are components that satisfy standards and specifications, including web service specifications that enable a stable but agile enterprise supporting rapid technology insertion.
The key objective of the DI2E Framework is to increase operational effectiveness, agility, interoperability, and cybersecurity while reducing costs. The framework consists of a reference implementation (RI), a test bed, and a storefront. When completed, the DI2E will provide a fully integrated, cross-domain, globally connected, all-source intelligence enterprise that comprises the federated intelligence mission architectures of the military services, CSAs, Combatant Commands (CCMDs), the Intelligence Community (IC), and international partners.
The DI2E provides functionality that
transforms information collected for intelligence needs into forms suitable for further analysis and action
provides the ability to integrate, evaluate, interpret, and predict current and future operations or physical environments
provides the ability to present, distribute, or make available intelligence, information and environmental content, and products that provide better situational awareness to military and national decision makers
The vision of the founders of the DI2E Framework Testbed is to use a distributed (interagency) development paradigm to implement a software repository focused on componentized reuse, enabled by an open architecture and systematic conformance testing of components’ interfaces to specifications allowed in the architecture.
Our work on this project—the team included Shelly Barker, Ryan Casey, David Shepard, Robert Seacord, Daniel Plakosh, and David Svoboda, in addition to Eric and myself—spans two fronts:
We participate in a center of excellence (COE) that consists of universities and labs working with the government to execute DI2E framework processes. Our work focuses on helping the DoD develop the framework by providing feedback to the DI2E Program Management Office about processes and practices of the framework. When completed, the DI2E framework will comprise the architecture, standards, specifications, reference implementations, components, component storefront, compliance certification, and testing, as well as the configuration management and other governance processes necessary to realize the aforementioned objectives.
On a second front, we are evaluating specific components to be included in the software reuse initiative. The evaluated software is presented in a storefront of software components that can be reused when the defense intelligence community is building other systems.
Open Architecture Approach
As part of its approach, the government intends to reuse existing components of the DI2E enterprise, with the goal of taking advantage of free and open-source software, government-off-the-shelf software (GOTS), and commercial off-the-shelf (COTS) software.
Our team of researchers participates in the framework development by contributing to the design of the software component evaluation and developing software tools to support the evaluation process. These tools provide task automation and consistent evaluations across the distributed COE network of universities and labs. Our main focus, however, is on performing the software evaluations that are necessary to ensure quality reusable components are recommended for reuse.
Evaluating Components for Reusability
When a new or existing government program defines a need—be it a map, login service, or some other kind of widget to build out part of its system and fulfill some requirement—the program ideally will not have to build the system entirely from scratch. Through software reuse, the program should be able to easily identify and examine components that have already been evaluated, embedded, and tested. Our role involves evaluating and testing the software components that will be housed in the DI2E storefront for programs to view and ideally reuse.
When we first began work on the DI2E Framework in 2013, together with the other COE labs, we focused on designing and building up an evaluation framework. We began in an Agile fashion, building on a checklist that now contains approximately 70 questions asking for judgments and measures of different aspects of the software, including:
How easy is it to find the software?
How easy is it to install the software?
How complete is the testing?
Is there a community that supports this?
Can you go online and easily find information about it?
Next, we used the checklist to answer these questions for each piece of software that we evaluated. We tracked down information by examining and using the software and also reading and evaluating the documentation. To answer the evaluation questions that the user experience or documentation did not address, we asked the software developers to answer such questions as
What development process do you use?
Do you use bug tracking?
Do you have a checklist for release?
What is your approach to testing?
Are you measuring unit test coverage?
In addition to the checklist of questions, our team generated other prose documents, including installation and integration how-tos. For more mature software components, these documents point back to the software component’s documentation. While the checklist guides the evaluation, the prose sections capture more detailed information. The completed checklist provides a naturally indexed and self-contained summary of how applicable a piece of software is to the DI2E.
The prose documentation details the software evaluation, presenting the evidence used to justify the abbreviated checklist answers. Additional prose documents provide architectural details relevant to the DI2E, such as whether support is provided for deployment dependencies, data formats, and interfaces. This information can rapidly inform programs of record about the suitability of reusing a component in their system. The documented and validated data formats and interfaces will allow users to rapidly design a system from compatible components with a high level of assurance that the design is valid.
Our Evaluations
Our work on the DI2E framework also included software component evaluations that align with the ETC’s areas of expertise in data-intensive scalable computing. As of July 2014, we have evaluated assets that cover the following functional requirements:
data-content discovery
data mediation
data-handling
widget framework
One of the software component evaluation documents maps the component’s features to the services that the intelligence community is seeking. For example, if an agency has already identified that multiple source query capability is critical for its software, we have indexed existing software components with these services so that they may be easily identified.
Collaborations
At a higher level, our evaluation of the software components focuses on reusability, but security remains an underlying and important concern for every evaluation. One aspect of our security evaluation involves code analysis. For that aspect of our work, we are working with researchers in the CERT Secure Coding Initiative, who maintain a laboratory environment for static analysis. The Source Code Analysis Laboratory (SCALe) consists of commercial, open source, and experimental tools that are used to analyze various code bases, including those from the DoD, energy delivery systems, medical devices, and more. Using SCALe, source code auditors then identify violations of the published CERT Secure Coding rules.
In the Cloud
Given the federal government’s embrace of cloud computing, it is important to note that DI2E is set up as a private cloud environment. The DI2E cloud offers Infrastructure as a Service (IaaS), where testing machines can be provisioned, and Software as a Service (SaaS), where common developer tools are available for use. Working in the DI2E cloud enabled us to have on-demand access to infrastructure machines to test different software components.
Working in the cloud also allows us to address the "it works on my machine" problem, which my colleague Aaron Cois detailed in a recent blog post. This phrase describes a common problem in which developers, often early in their career, write software code to address a problem. After testing the code and finding that it works on their machine, the developers deploy it to customers where it may fail to work because of differences in system configuration. One positive aspect of working in the cloud is that the common environment allows configurations of systems used by collaborating organizations to be more homogenous. The configuration management systems exposed to cloud instances by the cloud administrators can enforce consistency that aids in component integration.
PlugFest and Future Work
Our research on DI2E aligns with ETC’s mission, which is to promote government awareness and knowledge of emerging technologies and their application and to shape and leverage academic and industrial research. There is considerable need for this type of research since "the practice of reuse has not proven to be … simple however, and there are many misconceptions about how to implement and gain benefit from software reuse," as Raman Keswani, Salil Joshi, and Aman Jatain write in a paper presented at the 2014 Fourth International Conference on Advanced Computing & Communication Technologies. Our work also leverages various SEI skillsets such as hands-on evaluation, construction of frameworks, and data processing.
My colleague Dan Plakosh and I also attended DI2E PlugFest, an annual demonstration of the DI2E framework. The Plugfest eXchange provided an environment of networked, interoperable, and reusable components where vendors deployed and showed their tools for providing flexible, agile, and data-driven capabilities to warfighters. At PlugFest, we were able to see first-hand which vendors were able to align their software with the ideals of the DI2E framework.
We welcome your feedback on our work in the comments section below.
Additional Resources
For more information about SEI Emerging Technology Center, please visit http://www.sei.cmu.edu/about/organization/etc/.
Editor’s Note: The appearance of external hyperlinks does not constitute endorsement by the United States Department of Defense (DoD) of the linked websites, or the information, products or services contained therein. The DoD does not exercise any editorial, security, or other control over the information you may find at these locations.
SEI Blog | Jul 27, 2015 01:50pm
I LOVE innovation because of all those fantastic things that can really improve our lives. As a traveller, my smartphone brings me internet, email, calendar, photos and best of all face to face contact with my nearest and dearest. These digital advances make it possible for me to stay close to those I love, get where I’m supposed to be on time and listen to the latest tunes. Ok, so maybe not real life changers like medical breakthroughs but these advances are designed to make life better. I’m not suggesting all digital is good, but when it’s done well it can be amazing!
What is Digital?
Like many of those interested in digital, I found it difficult to find a definition. Life is easier if things are defined: we don't need to think; we simply reach a common understanding. There's no formal definition, but I think there are some common characteristics of a digital service, which I have found to be a helpful guide:
Designed for users
Better service produced for citizens
Incrementally built, tested and continuously improved
Development teams learn as they go, by doing.
In DWP it’s far from clear-cut. There are lots of differences in digital delivery compared with all the other delivery methodologies such as PRINCE - they all have a place and there are pros and cons with each.
My secret learning so far - and I’m not sure if it’s right - is that it’s the people and their attitude that’s making the biggest difference!
So how do we deliver a Digital Project?
What goes on behind the scenes then? I’m trying to explode the myths behind digital development. One analogy that I’m going to share with you is ‘you do not need to know how to programme a phone to use one!’ and that’s exactly the point!
For me it’s about designing the best you can for your users. You may have heard the terms agile or scrum to describe the ways of working? What are they?
Put simply, work is broken down into manageable chunks, followed by prioritising a list of what to do first. Then you have a ‘sprint’. The sprints usually last a couple of weeks with the intention to deliver the task list, also known as a ‘backlog’. Progress is checked daily and any blockers preventing you from progressing are identified and removed. If you cannot remove the blockers move on and flag them to someone who can fix it.
To summarise
There’s more than one definition of ‘Digital’
We put users' needs at the heart of our design
We build it incrementally, try it out, learn then do it all again!
Making it happen is about the right people, well led and empowered to deliver at pace.
Discover Discovery
I’m now in the room at DWP’s new Transformation Hub in Leeds. I’m super proud to be part of a multifunctional team of user researchers, product owner, business analysts and developers.
Secure Communications Project team
This discovery is about Secure Communications, so government can communicate digitally with claimants and others. There are huge benefits for all if we land this well! But we can't boil the ocean, so we have picked a small area to focus on. For this discovery, that's Personal Independence Payment for those who are terminally ill. Our users are claimants, doctors, Macmillan and other support functions. I don't think I've ever felt more emotionally connected to both the Department's business and our customers.
We will be blogging about our experiences as we move through discovery over the coming weeks so please watch this space.
DWP Digital Blog | Jul 27, 2015 01:50pm
Q: I’m at a loss on how to deal with a recent hire. He’s very eager to prove himself and do well, but instead of learning his job (which involves very specific functions, procedures and deadlines) he spends time trying to find efficiencies in other areas and coming up with improvement ideas unrelated to the job. Consequently, he’s not up to speed. I DO like employees who show initiative, but I also need him to learn his job. So what’s the best way to let him know he needs to concentrate on the job first,...
SHRM
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jul 27, 2015 01:50pm</span>
|
By C. Aaron Cois, Software Engineering Team Lead, CERT Cyber Security Solutions Directorate
DevOps is a software development approach that brings development and operations staff (IT) together. The approach unites previously siloed organizations that tend to cooperate only when their interests converge, resulting in an inefficient and expensive struggle to release a product. DevOps is exactly what the founders of the Agile Manifesto envisioned: a nimble, streamlined process for developing and deploying software while continuously integrating feedback and new requirements. Since 2011, the number of organizations adopting DevOps has increased by 26 percent. According to recent research, those organizations adopting DevOps ship code 30 times faster. Despite its obvious benefits, I still encounter many organizations that hesitate to embrace DevOps. In this blog post, I am introducing a new series that will offer weekly guidelines and practical advice to organizations seeking to adopt the DevOps approach.
My Background
As a federally-funded research and development center (FFRDC), the SEI must maintain high standards of efficiency, security, and functionality. At the SEI, I oversee a software engineering team that works within CERT’s Cyber Security Solutions Directorate. My team develops tools and technologies to help federal agencies address cybersecurity risks, manage secure systems, and investigate increasingly complex cyber attacks and crimes. To fulfill these responsibilities, we develop many increasingly complex software applications, and DevOps has become a necessary, defining factor in our software development process.
Our role in helping federal agencies assess cybersecurity risks heavily influences our approach to DevOps, necessitating that we weave security considerations into every facet of our software development lifecycle.
Cybersecurity is often misunderstood or even ignored as new systems are designed and developed, falling out of view behind higher-profile quality requirements, such as the availability or correctness of software systems. Due to CERT’s responsibility to our sponsors and the community, security is consistently a first-tier concern, addressed as an early and fundamental requirement for any system developed by our team. This focus has precipitated our research into Secure DevOps, or DevOpsSec, a topic we will revisit often in this blog series.
Origins and Benefits of DevOps
DevOps emerged in 2009 when a group of Belgian developers hosted DevOps Days, which advocated collaboration between developers and operational staff. Since then, organizations have rapidly adopted DevOps. In their 2014 State of DevOps report, Puppet Labs found DevOps adopters to be "deploying code 30 times more frequently with 50 percent fewer failures." In addition, the more than 9,000 people who completed the Puppet Labs survey reported the following:
Firms with high-performing IT organizations were twice as likely to exceed their profitability, market share, and productivity goals.
IT performance strongly correlates with well-known DevOps practices, such as use of version control and continuous delivery.
Organizational culture is one of the strongest predictors of both IT performance and overall performance of the organization.
For more on the origins of DevOps, see my post, An Introduction to DevOps.
Addressing Challenges to DevOps Adoption
Before an organization can consider adopting DevOps, it needs to shift the prevailing mindset and culture and gain a better understanding of how DevOps works. In my experience, some barriers to adoption are technical, and a number are cultural. The practical advice and suggestions that we will publish every Thursday will focus on three core areas of DevOps:
collaboration and cooperative culture
infrastructure as code
automation and repetition
The following are some of the specific challenges that I will address in the subsequent weeks:
Continuous integration: What build server should I choose? How do I know what processes to automate? Who manages build configurations? There are many questions involved in implementing robust continuous integration in the enterprise. In this series we will cover many common issues and some advanced topics to get your organization on a successful path.
Continuous deployment: This concept terrifies many organizations, but it doesn’t have to. There are many paths to continuous deployment, and many ways to implement the technology in a way that maintains stability and assurance in your delivered products. Stay tuned for more.
Fear of automation: DevOps provides a means for automating repetitive tasks within the SDLC, allowing engineers to focus on the important task of writing code. However, the fear of automated tools and the technical expertise needed to use them, especially in legacy systems, is pervasive. We’ll talk about what tasks to automate, when to automate, and the cost and benefits of automation.
Incomplete implementation: Agilists often encounter organizations that embrace Agile development language but ignore fundamental concepts and behaviors. This can result in a watered down process of writing code very fast without documentation. This is not Agile, this is irresponsible. In DevOps, I have witnessed the same problem. For instance, an organization may think it embraces DevOps, but it may not have any operations staff on project teams. This is not DevOps.
Breaking down the silos: Altering organizational culture to enable developers and operations engineers to fully collaborate on a project is trickier than it sounds. We’ll discuss a number of issues and tactics for shaping organizational culture and thinking to achieve your goal of functional DevOps.
Tailoring DevOps: There are many ways to do DevOps. It is important to note that different teams and projects may structure DevOps practices differently, depending on their needs. I, along with several members of my team, will present tactics, case studies, and alternatives throughout this series.
DevOpsSec: Most software teams believe in secure software, but are unsure how to structure their process to produce verifiably, consistently secure code. We will present tools, techniques, and practices to help you increase your software security through DevOps.
Infrastructure as code: In addition to writing code for an application, software development teams practicing DevOps develop code to define their infrastructure. There are many advantages, and many pitfalls, to automated environment provisioning, and it will be a frequent topic in this series.
Automation & repetition: In addition to being a significant time-saver, automation and repetition of complex tasks can give a team extreme confidence in their ability to perform these tasks when it counts. But what steps should be automated? What tools are best? We’ll discuss a variety of topics around DevOps automation throughout this series.
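The automation theme above can be made concrete with a small sketch. This is not from the SEI series; it is a minimal, hypothetical illustration of codifying a repetitive build-test-deploy sequence so that it runs the same way every time and stops at the first failure:

```python
# Minimal sketch of automating a repetitive SDLC sequence.
# Step names and step bodies are hypothetical placeholders.

def run_pipeline(steps):
    """Run (name, func) steps in order; stop at the first failure."""
    results = []
    for name, step in steps:
        try:
            step()
            results.append((name, "ok"))
        except Exception as exc:
            results.append((name, f"failed: {exc}"))
            break  # later steps depend on earlier ones, so stop here
    return results

# Three automated stages standing in for compile, test, and deploy.
steps = [
    ("build",  lambda: None),
    ("test",   lambda: None),
    ("deploy", lambda: None),
]
```

In practice a team would delegate this sequencing to a build server rather than a script, but the point stands: once the sequence is code, it is repeatable and auditable rather than a manual checklist.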
Looking Ahead
While this series will provide weekly guidelines and advice on DevOps adoption, I will continue to publish more in-depth posts that take a deeper dive into issues surrounding DevOps. The next post in this series will explore continuous integration in DevOps.
We welcome your feedback. What issues surrounding DevOps do you want to know more about? What challenges is your organization facing in adoption? Please leave feedback in the comments section below.
Additional Resources
To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.
To view the August 2011 edition of the Cutter IT Journal, which was dedicated to DevOps, please visit http://www.cutter.com/promotions/itj1108/itj1108.pdf.
Additional resources include the following sites:
http://devops.com/
http://dev2ops.org/
http://www.evolven.com/blog/devops-developments.html
http://www.ibm.com/developerworks/library/d-develop-reliable-software-devops/index.html?ca=dat-
SEI
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jul 27, 2015 01:49pm</span>
|
We want to make better use of social media to provide a more professional and consistent experience for our claimants. Later this Autumn we’ll be testing and learning by giving access to Facebook and YouTube to staff in 3 Jobcentre sites: London Bridge, Newport and Rusholme.
A new Facebook page called Find Share Connect will be our flagship social media channel. It will provide a space to bring together claimants and employers and give advice on job search, training opportunities and support recruitment activity by partners. Find Share Connect will be a national page but during the test will have a local flavour with jobs content tailored to the three test sites.
You can check out some of what we’re planning to do in this exciting pilot by watching this film. We will also share the experience through Twitter and future updates on this blog.
DWP Digital
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jul 27, 2015 01:49pm</span>
|
By C. Aaron Cois, Software Engineering Team Lead, CERT Cyber Security Solutions Directorate
This post is the latest in a series for organizations implementing DevOps.
A DevOps approach must be specifically tailored to an organization, team, and project to reflect the business needs of the organization and the goals of the project.
Software developers focus on topics such as programming, architecture, and implementation of product features. The operations team, conversely, focuses on hosting, deployment, and system sustainment. All professionals naturally consider their area of expertise first and foremost when discussing a topic. For example, when discussing a new feature a developer may first think "How can I implement that in the existing code base?" whereas an operations engineer may initially consider "How could that affect the load on our servers?"
When an organization places operations engineers on a project team alongside developers, it ensures that both perspectives will equally influence the final product. This is a cultural declaration that in addition to dev-centric attributes (such as features, performance, and reusability), ops-centric quality attributes (such as deployability and maintainability) will be high-priority.
Likewise, if an organization wants security to be a first-class quality attribute, a team member with primary expertise in information security should be devoted to the project team.
Every Thursday, the SEI Blog will publish a new blog post that will offer guidelines and practical advice to organizations seeking to adopt DevOps.
We welcome your feedback on this series as well as suggestions for future content. Please leave feedback in the comments section below.
SEI
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jul 27, 2015 01:49pm</span>
|
Everyone has those moments when they feel blue or out-of-sorts. Most of the time these feelings come and go. But when sadness persists for more than two weeks it could be a signal of something bigger: depression that may affect a person’s ability to function at work, at home, or in other aspects of their life. As an employer, you may notice a reduction in an employee’s productivity, an increase in absences or a shift in their overall disposition. Depression is a common illness that affects more than...
SHRM
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jul 27, 2015 01:49pm</span>
|
"I have a dream" a man once said. He went on to describe exactly what that dream was. His articulation was so successful that many of those listening heard a call to arms and understood that it was the start of a journey that they all wanted to be part of. He really knew how to take people with him.
My name is Rachel Woods and I’m the Product Owner for the Secure Communications Project. It’s my role to own, champion and protect the scope of the project. I want to describe the scope of our project to you and bring you along for the ride… I just need to work out what it is first and that’s not easy.
Herding ideas
Working in Government is wonderful and collaborative and hard. A lot of the great ideas in Government are not championed by individual dreams but are born from conversations which then develop and morph as more minds enter the picture, and we have a lot of (rather wonderful) minds in DWP.
When I started on the project about 4 weeks ago a lot of people had already formed their own idea of what we were doing. The difficulty was that there was such a diversity of ideas that it was hard to pinpoint not only what we might want to try to deliver but what our exam question was. There were so many questions we could try to answer.
Getting the right people in the room
Over the last few weeks the whole team has worked together and corralled all the outputs of those wonderful minds, gradually discounting them or putting them into a backlog until we were left with something that looked like it might be our scope. The criteria we used to de-scope were fairly wide but included some obvious things like ‘it’s being looked at elsewhere’. Each time we made a leap forward it was always down to one thing: a visit - getting the right people around our wall and having a really good discussion not only about what we are doing but why. Stakeholders are vital, but working on scope in the discovery phase has really brought home to me how important it is to have engaged stakeholders who are not afraid to challenge. Without challenge we don’t change and we don’t improve.
Show the Thing… even if the ‘thing’ is just your thinking
Today we had our first face-to-face Show and Tell and we took stakeholders on the journey we had been on, literally walking them through all our working out on our many, many walls to show how we arrived at our current thinking. We were looking for buy-in and reassurance that if we do eventually build something it will be ‘the right thing’ for the right users.
Show the Thing…
The result? We now know what the exam question is. Our next challenge is to take our messy workings out and decant them into a clear, concise and engaging explanation to share with those that couldn’t be with us.
If you get the chance it’s worth revisiting the vision outlined by Martin Luther King Jr and looking at how far it has come. I’m sure if he were Product Owner for his project he wouldn’t be signing it off as ‘done’, but I think he would be proud of the progress that has been made. Me too.
DWP Digital
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jul 27, 2015 01:49pm</span>
|
By Grace Lewis, Principal Investigator, Edge-Enabled Tactical Systems, Software Solutions Division - Advanced Mobile Systems (AMS) Initiative
Soldiers in battle or emergency workers responding to a disaster often find themselves in environments with limited computing resources, rapidly-changing mission requirements, high levels of stress, and limited connectivity, which are often referred to as "tactical edge environments." These types of scenarios make it hard to use mobile software applications that would be of value to a soldier or emergency personnel, including speech and image recognition, natural language processing, and situational awareness, since these computation-intensive tasks take a heavy toll on a mobile device’s battery power and computing resources. As part of the Advanced Mobile Systems Initiative at the Carnegie Mellon University Software Engineering Institute (SEI), my research has focused on cyber foraging, which uses discoverable, forward-deployed servers to extend the capabilities of mobile devices by offloading expensive (battery draining) computations to more powerful resources that can be accessed in the cloud, or for staging data particular to a mission. This blog post is the latest installment in a series on how my research uses tactical cloudlets as a strategy for providing infrastructure to support computation offload and data staging at the tactical edge.
Cloudlet-Based Cyber Foraging
Our research—in addition to myself, the team includes Sebastian Echeverria, Soumya Simanta, Ben Bradshaw, and James Root—focuses on cloudlet-based cyber foraging. Cloudlets, a concept created by Mahadev Satyanarayanan (Satya) of Carnegie Mellon University’s School of Computer Science, are discoverable, generic, stateless servers located in single-hop proximity of mobile devices. Cloudlets can operate in disconnected mode, which means that communication with the central core is only needed for provisioning. They are also virtual-machine (VM) based, which means that they promote flexibility and mobility, a perfect match for edge environments.
Cyber foraging involves dynamically augmenting the computing resources of resource-limited mobile devices by exploiting a fixed computing infrastructure in close proximity. Cyber-foraging allows mobile users to offload computationally-expensive processing (such as face recognition, language translation, speech and image recognition) from a mobile device onto more powerful servers, thereby preserving device battery power and enabling more powerful computing. These capabilities are valuable for soldiers or emergency workers who often operate in tactical edge environments where these resource-intensive applications must be deployed reliably and quickly.
As described in our paper that we recently presented at MilCom2014 (We will update the link when it becomes available), Tactical Cloudlets: Moving Cloud Computing to the Edge, we created the following five different ways of doing cloudlet provisioning:
In optimized VM synthesis—described in our first blog post in this series, Cloud Computing for the Battlefield—the cloudlet is provisioned from the mobile device at runtime. The application overlay is built offline and corresponds to the binary difference between a base VM and that VM after the server portion of an application is installed. After the VM overlay has been transferred, it is applied to the base VM; the result is a complete VM running the server portion of the application, which is executed from a client running on a mobile device. Due to the large size of the application overlay files, however, the battery and network bandwidth consumed in transferring them proved too expensive for mobile and edge environments. As an alternative, we started looking at application virtualization as a possible solution to this problem.
In Application Virtualization—described in our second blog post, Application Virtualization for Cloudlet-Based Cyber Foraging at the Edge—the cloudlet is also provisioned from the mobile device at runtime. Application virtualization uses an approach similar to operating system (OS) virtualization, by tricking the software into interacting with a virtual rather than the actual environment. A runtime component intercepts all system calls from an application and redirects these to resources inside the virtualized application. The virtualized application that is sent from the mobile device to the cloudlet at runtime is much smaller than an application overlay, but still large for transfer in edge environments.
In Cached VM, the cloudlet is pre-provisioned with service VMs that correspond to mission-specific capabilities that match the client apps on the mobile device. Each service VM has a unique service identifier.
In Cloudlet Push, the cloudlet is not only pre-provisioned with service VMs for mission-specific capabilities, but also the corresponding mobile client apps. At runtime, the cloudlet client obtains a list of available applications on the cloudlet, similar to accessing an app store. It then checks if the selected application exists for the mobile device’s OS. If so, the cloudlet client receives the app and installs it on the mobile device while the cloudlet server starts the corresponding service VM.
In On-Demand VM Provisioning, a commercial cloud provisioning tool is used to assemble a service VM at runtime. In this case the cloudlet has access to all the elements for putting together a service VM based on a provisioning script. The experimental prototype uses Puppet, and the provisioning script is a manifest that is written in Puppet's declarative language.
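To make the overlay idea in the VM synthesis description above concrete, here is a toy sketch. It is not the SEI implementation (which diffs real VM images); it only illustrates the principle that an overlay records the blocks that differ between a base image and a modified image, and that applying the overlay to the base reconstructs the modified image. Block size and image contents are made up:

```python
# Toy illustration of the overlay idea behind VM synthesis.
BLOCK = 4  # bytes per block (tiny, purely for demonstration)

def make_overlay(base: bytes, modified: bytes) -> dict:
    """Map block index -> replacement bytes, for blocks that changed.
    Assumes base and modified are the same length."""
    overlay = {}
    for i in range(0, len(modified), BLOCK):
        if modified[i:i + BLOCK] != base[i:i + BLOCK]:
            overlay[i // BLOCK] = modified[i:i + BLOCK]
    return overlay

def apply_overlay(base: bytes, overlay: dict) -> bytes:
    """Reconstruct the modified image from the base plus the overlay."""
    blocks = [base[i:i + BLOCK] for i in range(0, len(base), BLOCK)]
    for idx, data in overlay.items():
        blocks[idx] = data
    return b"".join(blocks)
```

The point the toy makes is why overlays are smaller than full images yet can still be "costly": the overlay scales with how much the application install changed, which for a real server install is still many megabytes.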
Over the last year, we ran several experiments to help us decide which cloudlet provisioning mechanism is best for the edge. We ultimately determined that Cached VM combined with Cloudlet Push would be the most effective because using both mechanisms
enabled lower energy consumption on the mobile device
placed fewer requirements on the mobile device
simplified provisioning in tactical environments
An added advantage of combining Cached VM and Cloudlet Push is that if the mobile device already has the client app, it can simply invoke the matching service VM; if not, it can obtain the client app from the cloudlet (similar to accessing an app store) and then invoke the matching service VM.
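The combined client behavior just described can be sketched as a simple decision flow. The catalog format and function names below are hypothetical, invented for illustration; the real cloudlet client is more involved:

```python
# Sketch of the combined Cached VM + Cloudlet Push client decision.
# Assumes the cloudlet exposes a catalog: service id -> list of
# platforms for which a client app is available (hypothetical format).

def prepare(service_id, device_os, installed_apps, cloudlet_catalog):
    """Return the actions a cloudlet client would take for a service."""
    if service_id not in cloudlet_catalog:
        return ["service unavailable on this cloudlet"]
    actions = []
    if service_id not in installed_apps:  # Cloudlet Push path
        if device_os not in cloudlet_catalog[service_id]:
            return ["no client app for this OS"]
        actions.append("download and install client app")
    actions.append("start service VM")    # Cached VM path
    return actions
```

If the app is already installed, only the cached service VM is invoked; otherwise the client app is pushed first, app-store style, before the service VM starts.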
The tradeoff of this approach is that it relies on cloudlets that are pre-provisioned with server capabilities that might be needed for a particular mission. Another tradeoff is that the cloudlet must connect to the enterprise, even if just at deployment time, to obtain those capabilities. To understand how we reached this conclusion, it is important to examine the results of our experiments, which relied on three computation-intensive applications that soldiers and emergency workers often rely on in tactical edge environments:
facial recognition (FACE)
speech recognition (SPEECH)
object recognition (OBJECT)
We used a Galaxy Nexus with Android 4.3 as a mobile device and a Core i7-3960x based server with 32 GB of RAM running Ubuntu 12.04 as the cloudlet. We created a self-contained wireless network (using Wi-Fi 802.11n at 2.4 GHz, 65 Mbps) to be able to isolate network traffic effects. Energy was measured using a power monitor from Monsoon Solutions.
The results of our experiments are shown in Table I below. The first column under each mechanism is the size of the payload in MB that is sent from the mobile device to the cloudlet for provisioning. The second column is application-ready time, measured as the time in seconds from the start of the provisioning process until the cloudlet responds that it is ready. The third column is the energy consumed on the mobile device during application-ready time.
To understand how we concluded that a combination of Cached VM and Cloudlet Push would be best for tactical cloudlet provisioning, it is important to trace our logic and the steps we took.
As Column 1 in the table above illustrates, the problem with VM synthesis is the size of the payload (the cargo of a data transmission) sent from the mobile device. The payload is large because the mobile device carries the computation that is going to be offloaded, which proved to be a problem for tactical edge environments.
As many other researchers have noted—and as can be seen in Column 3 under VM synthesis—energy consumption has a linear correlation with the payload size. Communication typically consumes the most battery energy on a mobile device.
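That linear relationship can be written as a simple cost model: energy is roughly a fixed radio cost plus a per-megabyte transfer cost. The coefficients below are hypothetical placeholders for illustration, not measurements from our experiments:

```python
# Hypothetical linear model of transfer energy vs. payload size.
# base_j and j_per_mb are illustrative placeholder coefficients.

def transfer_energy_joules(payload_mb, base_j=2.0, j_per_mb=0.5):
    """Estimated energy to send a payload: fixed cost + linear term."""
    return base_j + j_per_mb * payload_mb
```

Under any such model, halving the payload roughly halves the variable part of the energy cost, which is why the later mechanisms focus so heavily on shrinking what the mobile device must send.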
From VM Synthesis we turned to Application Virtualization in the hopes that we could address the large payload size problem, which led us to ask, Could we package applications in such a way as to reduce the size of what is transferred from the mobile device to the cloudlet?
Even though Application Virtualization significantly reduced the payload size (from 332 megabytes to 29 megabytes for object recognition), those payloads are still too large to be effective for soldiers in a hostile environment with limited resources and precious few seconds to spare. Under these constraints, it is important to ensure that the packaging is done correctly; if not, the application will not work.
We took a step back and asked, In edge environments, with the soldiers and first responders that we are targeting, is it always the case that to offload computations, they have to carry the offloadable computation with them? To answer this question, our team next considered the Cached VM approach. As part of our experiments we pre-provisioned cloudlets with computations that might be expected for a particular mission. This configuration would enable a soldier or first responder to query for cloudlets that already have the needed capabilities.
While Cached VM significantly reduced the payload size (almost to zero) as well as application ready time and energy consumption, the approach still presents a problem if a soldier or emergency responder is not able to access a needed application as a result of a changing mission or circumstance. We next experimented with Cloudlet Push. In doing so, we decided to not only pre-provision the cloudlet with service VMs that are needed for a particular mission, but also provision it with the corresponding software client applications for the mobile device, similar to an app store. With cloudlet push, the question asked by the soldier changes from
Do you have this computation?
to
How can you help me? What computation do you have?
Next, we considered On-Demand Provisioning. To use this mechanism, we used a commercial cloud provisioning tool to assemble a service VM at runtime. In this case, the cloudlet has access to all of the elements to put together a service VM based on a provisioning script. Our implementation relied on Puppet and the provisioning script is a manifest that is written in Puppet’s declarative language.
The benefits of On-Demand Provisioning include a small payload size, as well as a service VM that can be assembled at runtime. The drawbacks of this mechanism include a longer application-ready time. Also, the cloudlet needs to have all of the required server code components, or access to the components from enterprise repositories or code distribution sites. Overcoming these drawbacks led us to combine Cached VM and Cloudlet Push, which together consume less energy because the payload size is smaller, which in turn leads to shorter and more consistent application-ready times.
Tactical Cloudlets: Future Research
The next quality attribute that we will focus on in our research is trust, in particular trusted identities. For example
As a mobile device, is what I discovered really a friendly cloudlet?
As a cloudlet, did that offloading request really come from a friendly mobile device?
Our current cloudlet implementation relies on the security provided by the network; that is, a mobile device is allowed to interact with a cloudlet according to network policies and permissions. This means that the cloudlet implementation is as secure as the network. While this may be acceptable in many domains, it is likely not enough for tactical environments.
A key aspect of cloudlets is that they are discoverable. The cloudlet client that is installed on a cloudlet-enabled mobile device uses multicast DNS to query for cloudlets (set up as cloudlet services by the discovery service that runs on the cloudlet). Multicast DNS protocols are known to be insecure. However, securing the discovery process alone would not solve the problem, because port scans or other probing methods can easily bypass discovery.
A common solution for establishing trust between two nodes is to use a third-party, online trusted authority that validates the credentials of the requester or a certificate repository. The characteristics of tactical edge environments do not consistently provide access to that third-party authority or certificate repository, however, because they are disconnected, intermittent, limited (DIL) environments.
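One building block that can work in such DIL environments is a pre-shared key distributed during pre-mission provisioning, combined with an HMAC challenge-response: because both nodes already hold the key, no online authority needs to be reachable at runtime. This is only an illustrative sketch of that general idea, not the solution our research ultimately pursued:

```python
# Sketch: mutual proof of key possession without an online third party.
# The key is assumed to be distributed during pre-mission provisioning.
import hmac
import hashlib

def respond(challenge: bytes, key: bytes) -> bytes:
    """Prove possession of the shared key for a given challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Check a response against the locally held shared key."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A mobile device could challenge a discovered cloudlet (and vice versa) before offloading anything; a node without the pre-shared key cannot produce a valid response. Key revocation and compromise in the field remain the hard parts, which is why this is a research area rather than a solved problem.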
Our future research will explore solutions for establishing trusted identities in disconnected environments. Even though the motivation comes from cloudlets, the goal is for the results to be applied to any form of trusted communication between two or more computing nodes. A review of related work shows that this is indeed a challenge, and there are many relevant and interesting ideas, but not very many concrete solutions.
We welcome your feedback on our research. Please leave feedback in the comments section below.
Additional Resources
To register for an upcoming webinar on my research on tactical cloudlets, which will be held from 1:30 to 2:30 p.m. ET December 10, 2014, please visit this link.
My previous research in this field focused on cloudlets provisioned using VM synthesis, which was described in the SEI technical note Cloud Computing at the Tactical Edge. To read this technical note, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetID=28021.
SEI
.
Blog
.
<span class='date ' tip=''><i class='icon-time'></i> Jul 27, 2015 01:49pm</span>
|
When the Carer’s Allowance Digital Service (CADS) project went through discovery (long before I joined the team), they were asked to build and iterate a first class customer facing service but also begin the transformation of the Carer’s Allowance digital ‘back office’ process. Work commenced on that transformation with the building of the Carer’s Allowance Staff Access (CASA) tool.
CASA is a replacement for the twisting unseen ‘pipes’ that the digital transaction slips and slides its way through and a replacement for the big pool the transaction ends up in. Sounds fairly easy but it has been a long road. The CADS team has had to break new ground in many areas within the DWP to implement this new infrastructure.
But anyway, on to some of the CASA benefits that the Carer’s Allowance Unit should see:
Automation of a data input task that previously took 6 hours a day to complete; it now takes three clicks of a button.
CASA uses data that the customer has input to produce a summary sheet that staff previously had to complete manually.
The layout of the system is easier for staff to navigate than the previous system.
CASA is built to Government Digital Service and DWP accessibility standards.
The output used by Carer’s Allowance staff now has a more logical processing flow.
Sounds like obvious stuff? Well, it kind of is. The beauty of CASA is that it’s not doing anything radical in terms of what it delivers; it’s meeting the needs of the Unit and its staff. Previously the Unit was restricted by the IT infrastructure it used, which meant that it couldn’t change any processes after the customer clicked submit on their digital transaction. Simply put, CASA makes Carer’s Allowance digital processing simpler and faster, meaning staff can focus on processing claims and making decisions quickly for customers of the digital service.
The building of CASA has been driven by the Carer’s Allowance Unit requirements. The Carer’s Allowance staff have had direct input at every stage to shape and form how it will look and how they will interact with it.
CASA went ‘Live’ on 21/10/2014. Just because the service has gone live, however, doesn’t mean the CADS team is done; the first release of CASA is the Minimum Viable Product. Work has already begun to iterate CASA with the help of the Carer’s Allowance staff and provide yet more improvements to processing digital claims, which benefit the Carer’s Allowance Unit and its digital customers.
Hopefully the work carried out by the CADS team will pave the way for other areas of DWP to transform their IT infrastructure.
DWP Digital Blog | Jul 27, 2015 01:48pm
By C. Aaron Cois, Software Engineering Team Lead, CERT Cyber Security Solutions Directorate
This post is the latest in a weekly series to help organizations implement DevOps.
Melvin Conway, an eminent computer scientist and programmer, formulated Conway’s Law, which states: “Organizations that design systems are constrained to produce designs which are copies of the communication structures of these organizations.” Thus, a company with frontend, backend, and database teams might lean heavily towards three-tier architectures. The structure of the application developed will be determined, in large part, by the communication structure of the organization developing it. In short, form is a product of communication.
Now, let’s look at the fundamental concept of Conway’s Law applied to the organization itself. The traditional-but-insufficient waterfall development process has defined a specific communication structure for our application: Developers hand off to the quality assurance (QA) team for testing, QA hands off to the operations (Ops) team for deployment. The communication defined by this non-Agile process reinforces our flawed organizational structures, uncovering another example of Conway’s Law: Organizational structure is a product of process.
As the figure above illustrates, siloed organizational structures align with sequential processes, e.g., waterfall methodologies. The DevOps method of breaking down these silos to encourage free communication and constant collaboration actually reinforces Agile thinking. Seen in this light, DevOps is a natural evolution of Agile thinking, bringing operations and sustainment activities and staff into the Agile fold.
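Conway’s Law, as described above, can be sketched as a toy model (purely illustrative, not from the original post): treat the organization as a set of team-to-team communication channels and predict one component interface per channel.

```python
# Illustrative sketch of Conway's Law. The team names and the
# mirroring function are hypothetical examples, not a real tool.

def mirrored_architecture(team_communications):
    """Given pairs of teams that communicate, predict the system's
    component interfaces: one interface per communication channel."""
    return sorted({tuple(sorted(pair)) for pair in team_communications})

# A company with frontend, backend, and database teams, where each
# team talks mainly to its neighbour in the delivery chain...
channels = [("frontend", "backend"), ("backend", "database")]

# ...tends to produce a matching three-tier architecture: the component
# boundaries mirror the communication boundaries.
print(mirrored_architecture(channels))
# [('backend', 'database'), ('backend', 'frontend')]
```

The point of the sketch is that changing the architecture without changing the communication channels leaves a mismatch, which is exactly the gap DevOps addresses by adding channels between development and operations.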
Every Thursday, the SEI Blog will publish a new blog post that will offer guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
SEI Blog | Jul 27, 2015 01:48pm
You may be aware of the tragic passing of Dave Goldberg, husband of Facebook COO and anticipated SHRM15 keynote speaker Sheryl Sandberg. As might be expected, Sheryl is not able to join us in Las Vegas for our Annual Conference & Exposition. We extend our deepest condolences to Sheryl, and understand her need to be with family during this difficult time. In Ms. Sandberg's place, I am pleased to announce that New York Times bestselling author and Morning Joe cohost Mika Brzezinski will be speaking...
SHRM Blog | Jul 27, 2015 01:48pm
Most people claiming Carer’s Allowance go online instead of asking for a form to be posted to them. That’s because we’ve made a digital service that really is simpler, clearer and faster than putting pen to paper.
Over a third of our digital claims are being completed on tablets and mobile phones - our service works on any device so people can apply for Carer’s Allowance when it’s convenient for them.
So far, so good, but brilliant digital is a walk towards the horizon, not a climb to a mountaintop. We’re constantly researching and testing with real users, taking what we learn to change and improve the service.
We also know that no matter how good the digital service is, some people will continue to need help and support to use it.
Supporting our customers
The standard of customer service at the Preston-based Carer’s Allowance Unit is recognised as exceptional, and that’s a great starting point for supporting those who would otherwise find a digital service difficult.
There’s never been a ‘claim by phone’ option, but the Carer’s Allowance Unit has always provided great telephone support to its customers for new claims, changes of circumstance and general advice about the benefit. We’ll continue to do this as the first part of our assisted digital service. Many people just need the answer to a simple question or guidance on how to describe their personal circumstances. Others may need more, so rather than assuming a one-size-fits-all service, we’re researching to find out if there are user needs that we haven’t yet seen evidence of.
Finding out which kind of help people need
When a customer calls and asks for a paper claim form, we always suggest using the digital service. If the caller says they don’t want to, or can’t go online to claim, we ask a few more questions to find out if they could do it with some help. From there we can provide advice on where to get web access or face-to-face support. This means that we’re not turning anyone away from the digital service.
For ‘real-world’ support, we’re piloting the use of DWP’s huge network of Jobcentre Plus offices as drop-in locations for both internet access and personal support. To strengthen this option further, we’re establishing a network of Carer’s Allowance experts in the Jobcentres. Like the digital service, we’ll need to learn and iterate to get this right.
Owning the whole service
I’m responsible for the entire Carer’s Allowance Service. For me, this means that the digital components are key to maintaining our high standards and, of course, to making the unit more efficient.
We’ve embraced the digital service as part of the core Carer’s Allowance business - it isn’t just a separate function that plugs into the unit. We’ve been able to do this because the team that writes the code, designs the interactions and does the research works right here in the same building, as part of the same business. As we say very proudly in the footer of the digital service: "Carer’s Allowance - made in Preston".
Because the digital service is ‘ours’, the contact-centre staff - again, all working here, in the same office in Preston - are confident about telling users that it’s the best way to make a claim. They’re also able to influence changes and improvements. This all helps make the service more useful and usable for customers and helps staff work more efficiently.
The Carer’s Allowance digital service is ours: we’re proud of what we’re continuing to build, passionate about promoting it and committed to making sure everyone can use it.
DWP Digital Blog | Jul 27, 2015 01:47pm
By Nader Mehravari, Senior Member of the Technical Staff, CERT Cyber Risk Management Team
This blog post was co-authored by Julia Allen and Pamela Curtis.
Earlier this month, the U.S. Postal Service reported that hackers broke into its computer systems and stole data records associated with 2.9 million customers and 750,000 employees and retirees, according to reports on the breach. In the JP Morgan Chase cyber breach earlier this year, it was reported that hackers stole the personal data of 76 million households as well as information from approximately 8 million small businesses. These breaches and other recent thefts of data from Adobe (152 million records), eBay (145 million records), and The Home Depot (56 million records) highlight a fundamental shift in the economic and operational environment, with data at the heart of today’s information economy. In this new economy, it is vital for organizations to evolve the manner in which they manage and secure information. Ninety percent of the data that is processed, stored, disseminated, and consumed in the world today was created in the past two years. Organizations are increasingly creating, collecting, and analyzing data on everything (as exemplified in the growth of big data analytics). While this trend produces great benefits to businesses, it introduces new security, safety, and privacy challenges in protecting the data and controlling its appropriate use. In this blog post, I will discuss the challenges that organizations face in this new economy, define the concept of information resilience, and explore the body of knowledge associated with the CERT Resilience Management Model (CERT-RMM) as a means for helping organizations protect and sustain vital information.
A New Information Economy
The information economy is transforming every public and private sector, including the way we deliver healthcare and educational services, fight wars and provide national security, design and operate critical infrastructure, build cities and communities, and manufacture goods. The following are some characteristics of the current environment that indicate we now operate in an information economy:
Intangible goods (e.g., information, ideas, and intellectual assets) continue to increase in absolute value and relative volume. This trend is apparent from the fact that the market capitalization of the largest entities in the world is increasingly based on the value of their information assets (e.g., customer records, patient information, intellectual property, trade secrets, consumer purchasing and browsing data, new product specifications), and not solely their physical assets (e.g., land, buildings, equipment, and raw material). For businesses, information has moved from a supporting role to a leading role in determining mission success, to the point that information is now among the highest valued assets, products, and services. It is almost as if the bits that make up the information are more important than the atoms that make up the infrastructure and the environment in which the information resides.
Physical and cyber (i.e., virtual) worlds are increasingly intertwining, and their boundaries have blurred. This change is reflected in the fact that virtual goods and environments are replacing their physical counterparts (e.g., virtual grocery store coupons; virtual business parks; virtual ballot box; virtual offices; virtual jobs; virtual stock markets; virtual water coolers such as cloud-based collaboration tools, online chat groups, and other types of social media). Even bank robberies are virtual, and can be instigated from a different continent. People (the initial users of the Internet), businesses (the next generation of Internet users in the form of eCommerce), and things (in the sense of the Internet of Things) are converging into a ubiquitous evolutionary information marketplace.
Content (i.e., information) developers and owners are the kings, not the equipment manufacturers (on whose systems the content is viewed) or the communication service providers (through which the content is delivered).
Immediacy is valued more than thoughtfulness, correctness, and the absence of defects. Technological advancements are developed and put into practice at a speed that makes them inherently unpredictable and often disruptive. Every innovation comes with flaws that someone will eventually exploit for personal gain. This technological whirlwind has thus spawned an ever expanding and dynamic operational risk environment.
Evolving in the Risk Environment
One of the primary success factors in the information economy is the manner in which organizations protect their information assets while operating in a volatile risk environment. Traditional IT security has been the underpinning of e-commerce. Without it, businesses and consumers would not have had the trust and confidence to use the Internet. For organizations to survive and thrive in today’s information economy, however, they must manage risks to information assets in terms of all forms of technology that create, process, store, disseminate, and consume information. These include conventional information technologies; the ever-changing operational technology (OT), such as industrial control systems and physical access control mechanisms; and the rapidly evolving and expanding Internet of Things (IoT).
Traditional IT security has focused on the management of security risks within an organization’s enterprise IT environment, often performed by an IT security organization. Meanwhile, separate teams manage risk associated with other forms of technology, such as the operational technologies that monitor and control physical devices and processes (e.g., industrial control systems), and with areas where non-IT work processes are involved.
With the blurring of boundaries in the information economy, organizations must consider risk assessment and management across all forms of technology and assets. A more consistent and unified approach to information risk management will result in increased confidence and greater assurance that realized risk will not affect an organization’s ability to achieve its business mission.
Moreover, risks associated with various forms of technology are only one dimension of operational risk in today’s information economy. Proper protection and sustainment of information assets must go beyond technology-focused risk management activities. Traditional concepts of IT security (and the closely associated concepts of information security, information assurance, and cybersecurity) must evolve beyond their current technology scopes and be augmented by techniques and concepts from such domains as physical security (of tangible assets), safety (of people assets), and privacy (of personal information).
Information Resilience
The next step for organizations to consider is information resilience, which we define as the ability to protect and sustain critical information assets throughout their entire lifecycle (whether they are being created, processed, stored, disseminated, or destroyed) regardless of where such assets physically reside at any point in their lifecycle. In addition to preventing and defending against disruptions to technologies, information resilience emphasizes response and continuity during times of stress across technologies, people, and facilities.
Information resilience is concerned with information assets in two separate but interrelated dimensions:
lifecycle dimension - the stages of information creation, processing, storage, dissemination, and destruction (where some of the stages may take place in different order and/or in parallel)
containment dimension - the containment of information in technology, people, and/or facility assets at any point in its lifecycle
Information resilience addresses the entire operational risk landscape for information assets that are critical for enabling the organization’s mission success in the information economy. Protection and sustainment considerations, therefore, apply to information assets in every aspect and intersection of these dimensions.
Given the proliferation of data typical in the information economy, information assets must be prioritized. For example, critical information assets may be those that are used in mission-critical services, information entrusted to the business by others, intellectual property, or information essential to operation of the business, such as vital records and contracts. Identifying critical assets makes all other information resilience practices feasible.
Profiles can be created for critical information assets to specify their confidentiality, availability, integrity, custody, privacy, sensitivity, and acceptable use requirements (collectively, resilience requirements). They can also be used to characterize the individuals (e.g., employees, suppliers, customers, contractors, regulators) who have access to them, containers (e.g., people, devices, facilities) on which the information resides, the units (e.g., systems, applications, brains) in which it is processed, and the environment (e.g., electronic networks, transportation infrastructure) over which it may be transferred.
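As a purely illustrative sketch of the profiles described above (the field names and values are invented for this example and are not part of CERT-RMM itself), such a profile might be captured as a simple data structure:

```python
# Hypothetical sketch of an information-asset profile; field names
# and example values are illustrative only, not CERT-RMM definitions.
from dataclasses import dataclass, field

@dataclass
class AssetProfile:
    name: str
    # Resilience requirements, e.g. confidentiality/availability/integrity levels
    resilience_requirements: dict
    # Individuals with access (employees, suppliers, customers, contractors, regulators)
    authorized_individuals: list = field(default_factory=list)
    # Containers on which the information resides (people, devices, facilities)
    containers: list = field(default_factory=list)

profile = AssetProfile(
    name="customer records",
    resilience_requirements={
        "confidentiality": "high",
        "availability": "medium",
        "integrity": "high",
    },
    authorized_individuals=["employees", "contractors"],
    containers=["database server", "backup facility"],
)
print(profile.name, profile.resilience_requirements["confidentiality"])
```

The value of writing profiles down in a structured form like this is that the resilience requirements become checkable inputs to the continuous risk management activities described next, rather than tacit knowledge.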
Satisfying the resilience requirements of critical information assets requires continuous risk management (i.e., identifying risks whenever and wherever information assets are created, stored, transported, and processed, assigning dispositions to risks, and mitigating or otherwise handling risks). Administrative, technical, and physical protection controls must be applied as appropriate to meet resilience requirements. Configuration control should be used to establish baselines and point-in-time captures of information assets.
Methods for Managing Information Resilience
Approaches for assessing, improving, and managing information resilience could be ad hoc or based on some structured methodology. Information resilience requires involvement and contributions from such domains as IT, OT, IoT, physical security, safety, and privacy. Therefore, an ad hoc approach will likely not produce the desired and sustainable results. Efforts to assess, improve, and manage information resilience should be based on proven and structured approaches that provide repeatable, predictable, high-quality outcomes. Use of such comprehensive and flexible frameworks as the CERT Resilience Management Model (CERT-RMM) and the associated body of knowledge can help in achieving a sustainable capability.
There are many effective and appropriate ways for an organization to use CERT-RMM to guide, inform, or otherwise support improvements to its information resilience management activities. For those familiar with the concept of process improvement, CERT-RMM can be used as the body of knowledge that supports model-based process improvement activities for information resilience management principles and practices. Alternatively, a targeted improvement roadmap (a term used to designate a specific collection of CERT-RMM domains that collectively address a specific objective) has been defined to assist organizations in planning and guiding their journey towards enterprise-wide information resilience.
Benefits of using such structured frameworks include
enabling native incorporation of operational risk management principles and practices into the organization’s cultural norm or DNA (i.e., it is integrated into the normal rhythm of the business)
ensuring that risk-based activities align with organizational risk tolerances and appetite
serving as the starting point for socializing important harmonization and convergence principles across IT, OT, IoT, physical security, privacy, etc.
facilitating collaboration between activities that have similar operational risk management objectives
maintaining a business mission focus
improving confidence in how an organization responds in times of operational stress
enabling measurements of effectiveness
enabling institutionalization and culture change
guiding improvement in areas where an organization’s capability does not equal its desired state
Cost-Benefit Issues
Several issues should be considered if an organization wants to maximize the value of adopting an information resilience approach. First, the need to protect information assets must be continually balanced with the need to run the business. Second, an objective of total prevention of disruptive events that negatively affect information assets (e.g., cyber-attacks) is not practical. And third, treating all risks and all information assets equally is not cost effective. The most important information assets must be identified and the protection and sustainment measures related to them prioritized.
Looking Ahead
Today’s information economy has reshaped the organizations, industries, and communities that we are part of, creating new technological capabilities and business opportunities while blurring digital and physical worlds. At the same time, however, it has created new security challenges that feed an ever more dynamic and expanding risk environment that is simply beyond the scope of a traditional IT security function. A more consistent and unified approach to information risk management will result in increased confidence and greater assurance that realized risk will not affect an organization’s ability to achieve its business mission.
Information resilience is such an approach. It is a more overarching, and manageable, concept that promises to be a key tool in our tool box of techniques for protecting and sustaining organizations’ critical information assets and associated dependent business products, services, and missions. It is a critical dimension of operational resilience as defined by CERT-RMM, which, in addition to information asset resilience, addresses the resilience of technology, people, and facility assets.
Additional Resources
For more information about CERT’s Resilience Management Model (CERT-RMM)
http://www.cert.org/resilience/products-services/cert-rmm/
https://www.csiac.org/spruce/resources/ref_documents/recommended-practices-managing-operational-resilience
For more information about information economy and related transformations
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/206944/13-901-information-economy-strategy.pdf
http://www.gartner.com/technology/research/digital-business/
http://www.forbes.com/sites/gartnergroup/2014/05/07/digital-business-is-everyones-business/
http://fortune.com/2014/02/27/box-ceo-how-will-your-company-compete-in-the-information-economy/
http://mitsloan.mit.edu/ide/
SEI Blog | Jul 27, 2015 01:47pm
The recent announcement about Sheryl Sandberg being unable to join us in Las Vegas at SHRM’s Annual Conference is no doubt disappointing to a lot of people - myself included. But at the same time, who can fault her for wanting and needing to be with her family during this difficult time? I have been reading a lot about the sudden passing of her husband, Dave Goldberg, and it got me thinking about a lot of things. First the outpouring of condolences, thoughts and prayers from so many people that Sheryl and Dave both knew and did not...
SHRM Blog | Jul 27, 2015 01:46pm
It has been about 6 months since we started to build the Business Design community at DWP. Some of our community are already finding that being able to operate in these roles is a great way to develop their career. When we set up our Business Transformation group we recognised that DWP needed to build the great digital services that customers expect from government, but equally that we had to make a step change in the way the core of the business operates. Creating a joined-up business design is a critical part of building a more modern and efficient DWP, with customers at the heart of our thinking.
We brought together the Business Design community to join up across the delivery programmes within the department, and this team has been co-creating the blueprint for the department.
To get the design work moving quickly, we wanted to follow a recognised approach rather than re-invent any wheels. We supplemented the established DWP teams with a small number of experienced business designers from outside the department. In the longer term we want to build a sustainable function within the department, so we will be transferring knowledge and upskilling our teams.
The skills we need fall into two broad areas:
Consultancy skills - building our skills in understanding problems and influencing stakeholders, for example within our in-flight change programmes.
Technical design skills - building a "tool bag" of techniques to design the business.
We’re working on both areas but for this blog wanted to focus on some useful technical design skills for the designers/architects operating at various different levels in DWP. We selected the following tools to explore in our Business Design Academy:
One-Page Strategy - seeking the answers to a series of key questions and presenting it on one page.
Andrew Campbell’s "9 tests" of Organisation Design - exposing unavoidable trade-offs and assessing the advantages and disadvantages of different designs.
Organisation charting techniques - describing the relationships between different types of organisation units.
Business Capability mapping - showing individual business capabilities in relationship to each other, enabling us to see the larger context and align across our people, technology and processes.
Design Principles - building a link from our business strategy into ‘rules’ that will guide design decisions. For DWP our design principles will follow our Guiding Principles for business transformation.
DWP Business Transformation - Guiding Principles
Customer Segmentation Models - dividing customers into groups based on distinct needs so that they can be treated in similar ways.
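As a hypothetical illustration of the segmentation idea in the list above (the customer names and needs are invented for the example), grouping customers by a shared need is conceptually simple:

```python
# Illustrative sketch of customer segmentation; the customers,
# fields and needs below are made up for this example.
from collections import defaultdict

def segment(customers, needs_key):
    """Group customers into segments that share a distinct need,
    so each group can be treated in a similar way."""
    groups = defaultdict(list)
    for customer in customers:
        groups[customer[needs_key]].append(customer["name"])
    return dict(groups)

customers = [
    {"name": "A", "need": "digital self-service"},
    {"name": "B", "need": "assisted digital"},
    {"name": "C", "need": "digital self-service"},
]
print(segment(customers, "need"))
# {'digital self-service': ['A', 'C'], 'assisted digital': ['B']}
```

In practice the hard work is choosing the needs that define the segments, not the grouping itself; the research activity described earlier is what supplies that evidence.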
We’ve had lots of interest in building our tools and techniques for business design, and some of our community have already shown that being able to operate in these roles is emerging as a great way to develop their career.
Follow Andrew on Twitter and don’t forget to sign up for email alerts.
DWP Digital Blog | Jul 27, 2015 01:46pm