Blogs
By Joe Yankel, Member of the Technical Staff, CERT Cyber Security Solutions Directorate
This post is the latest installment in a series aimed at helping organizations adopt DevOps.
Docker is quite the buzz in the DevOps community these days, and for good reason. Docker containers provide the tools to develop and deploy software applications in a controlled, isolated, flexible, and highly portable infrastructure. Docker offers substantial benefits for scalability, resource efficiency, and resiliency, as we'll demonstrate in this post and upcoming posts on the DevOps blog.
Linux container technology (LXC), the foundation that Docker is built upon, is not a new idea. LXC has been available in the Linux kernel since version 2.6.24, when Control Groups (or cgroups) were officially integrated. Google was using cgroups as early as 2006, because it has long looked for ways to isolate resources running on shared hardware. In fact, Google acknowledges firing up over 2 billion containers a week and has released its own version of LXC containers called lmctfy, or "Let Me Contain That For You."
None of this technology was easy to adopt, however, until Docker came along and simplified it. Before Docker, developers had a hard time accessing, implementing, or even understanding LXC, let alone its advantages over hypervisors. DotCloud founder and current Docker chief technology officer Solomon Hykes was on to something big when he began the Docker project and released it to the world as open source in March 2013. Docker's ease of use is due to its high-level API and documentation, which enabled the DevOps community to dive in full force and create tutorials, official containerized applications, and many additional technologies. By lowering the barrier to entry for container technology, Docker has changed the way developers share, test, and deploy applications.
How can Docker help us in DevOps? Well, developers can now package up all the runtimes and libraries necessary to develop, test, and execute an application in an efficient, standardized way and be assured that it will deploy successfully in any environment that supports Docker.
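As a rough illustration of that packaging (the file and image names here are made up for this example), a Dockerfile only a few lines long is enough to bundle a small Python application with its runtime and libraries:
# Hypothetical Dockerfile: package a small Python app together with its runtime and dependencies
FROM python:3
COPY requirements.txt app.py /usr/src/app/
RUN pip install -r /usr/src/app/requirements.txt
CMD ["python", "/usr/src/app/app.py"]
Anyone with Docker installed can then build and run the identical environment with docker build -t myapp . followed by docker run myapp, regardless of what is installed on the host.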
Initial reactions to container technology often compare containers to small virtual machines. The advantages of containers over VMs become apparent, however, when it comes to performance. In particular, a Dockerized application starts quickly, without the need to perform all of the steps associated with starting a full operating system; containers share the operating system kernel, and other binaries and libraries where appropriate. Below is an image from the Docker website that highlights the differences. Note how containers incur much less time and space overhead than virtual machines.
[Image: Docker containers vs. virtual machines, from the Docker website]
Another great feature is the built-in versioning that Docker provides. This "git-like" versioning system can track changes made to a container, and the versioned containers can be stored in either public or private repositories, depending on what your organization desires or requires.
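For example, the following standard Docker commands (the image and repository names are placeholders) show how a container's changes can be captured, inspected, tagged, and pushed to a repository:
$ docker commit <container-id> myapp:snapshot      # save a container's changes as a new image version
$ docker history myapp:snapshot                    # list the layered changes that make up that image
$ docker tag myapp:snapshot myregistry/myapp:1.1   # tag the version for a public or private repository
$ docker push myregistry/myapp:1.1                 # store the versioned image in that repository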
Docker had a big impact in 2014, and in 2015 you can expect even greater adoption by both small and large companies. This uptake is already evident: major cloud services, such as Amazon Web Services and Microsoft Azure, have quickly added Docker support.
We expect Docker to play a key role in future conversations about designing, building, and deploying applications, especially with the guarantee that an application will run in a production or customer environment just as it did during development and testing. Some weaknesses remain, notably communication between Docker containers running on different servers, but this will only improve with time. You can also expect some competition around the corner in 2015. If you haven't tried Docker yet, give it a try; this technology is just beginning to fire on all cylinders, and much more is to come.
Every two weeks, the SEI will publish a new blog post that will offer guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
Additional Resources
To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.
To read all the installments in our weekly DevOps series, please click here.
Published in the series thus far:
An Introduction to DevOps
A Generalized Model for Automated DevOps
A New Weekly Blog Series to Help Organizations Adopt & Implement DevOps
DevOps Enhances Software Quality
DevOps and Agile
What is DevOps?
Security in Continuous Integration
DevOps Technologies: Vagrant
DevOps and Your Organization: Where to Begin
SEI Blog | Jul 27, 2015 01:36pm
The month of May has always been my favorite month of the year. I was born and raised in Speedway, Indiana, and if you don't know about this small town, it is famous for the Indianapolis 500. The Indianapolis Motor Speedway is the lifeblood of the town and community, and it will always hold a special place in my heart. The month also brings some of my greatest heartaches. I lost a dear friend in 2013, on race day. We attended the race together in 2012, as this was a "bucket...
SHRM Blog | Jul 27, 2015 01:36pm
Our vision for DWP is to put user needs at the heart of our thinking, delivering the policy intent through digital services which we continuously improve. We recognise there is an opportunity for us to deliver some of our services in a much more modern and efficient way, by collecting and using data in a more joined-up way, automating whenever it’s safe to do so, constantly improving our services and reacting rapidly to user feedback.
We’re working on DWP’s Business Design to help communicate and join up all of our change efforts across the organisation.
The six Enablers in DWP’s Business Design
To summarise our more detailed work, we’re highlighting six critical ‘Enablers’ of our transformation journey. We are integrating these throughout our design, so that they become part of the fabric of how we do things at DWP:
Secure self-service wherever possible - our ability to create simple, secure and responsive online services. These need to be so good and so trusted that our customers choose to use them whenever they can in preference to other channels.
Decision-making based on trust and risk - our ability to understand the trust level and risk associated with each individual transaction we process, so that we can spot patterns and intervene when we need to. Risks could be risk to a customer (e.g. due to their health condition) or to government (e.g. fraud).
Intelligent data use, sharing and management - our ability to use data to drive more efficient services. Integrating and sharing data within DWP and beyond, supported by the right technology and data science skills.
Advanced analytics for segmentation - our ability to identify customers who may be vulnerable or require a different level of service, so that we can offer appropriate customer journeys and provide better decision-making support to our front-line staff.
Automated processes - our ability to automate processes whenever possible, enabling our people to spend more of their time helping customers. The ability to continuously monitor the performance of our processes and improve them.
Customer behaviour change - our ability to design and manage our services through continuous improvement to promote customer behaviours and improve social outcomes.
Through our current change work, we’re building some aspects of these already.
These aren’t just technology enablers. Each of them is the ability for DWP to meet business needs, so it’s made up of people with the appropriate skills and experience, the processes they follow, and the technology that supports the business outcomes. Importantly, they’re not a set-in-stone prescription, but a sketch that will evolve through iteration and learning. We can’t know everything in advance about how the Enablers will work, but we do know that they are all areas where we must have a step-change from today, if we are to realise our vision.
The Enablers are not the only things that we need to build, but they are critical ones. This aspect of the design intentionally focuses on the "mechanistic" aspects of the Enablers, and we are working on this hand-in-hand with colleagues focussed on DWP’s culture and behaviours. That means thinking beyond just skills and experience, to consider our attitudes and confidence to challenge current ways of thinking.
Our Enablers are the result of work across our community. We’ve agreed them by working across a wide group of people from all over the organisation, and we’ll continue the discussion further in the coming weeks and months.
Keep in touch by following Andrew @abesford on Twitter.
DWP Digital Blog | Jul 27, 2015 01:36pm
Jon Townsend, Head of DWP Cyber Intelligence Response Centre (D-CIRC)
We’re boosting our digital capability in our Cyber Intelligence Response Centre (D-CIRC) to ensure we have the intelligence-led cyber security our online services need to operate effectively and safely.
We’re working with Government and private sector partners to build and mature our capability, detect malicious behaviour, and respond to cyber threats.
DWP delivers public services that millions of people rely on, which can transform lives. People increasingly expect to access services digitally at a time which suits them, and it is only right that we transform the way we operate to design automated, agile, efficient services, putting customers’ needs first.
As we take a more digital delivery approach, we know that cyber-resilience is important to ensure the continuity of our digital services. And we know that an intelligence-led approach works best. This means that we use data from multiple internal and open source feeds to analyse and explain the threat landscape. With this picture of the cyber security threats, we can reduce risks, detect malicious behaviour and recommend appropriate response strategies.
That’s why we’re growing digital and technology skills in DWP. Our Digital Academy has trained more than 1000 of our own staff, and academy graduates have gone on to work in teams that are driving the development of digital services.
We’re also building our capability, to bring in the skills and experience we need. The market is competitive but we can give people the variety and flexibility they often look for in their digital careers, while giving us the expertise we’re looking for.
Our Cyber-Intelligence Response Centre currently has vacancies for a Cyber Intelligence Fusion Specialist and Data Insight Specialists, across a range of grades and levels of experience. Find out more about these vacancies.
DWP Digital Blog | Jul 27, 2015 01:35pm
There is a dynamic tension in today’s workplace. With the rise of activist investors, the ubiquity of technology that spreads news instantly, and increasing public scrutiny of corporate actions, business leaders are more driven than ever to aim for perfection. And yet the speed of business also forces these leaders to make immediate decisions. So, while there is often a need to make an ideal choice, there is also a need to make a choice now, even if it isn’t perfect. What to do? This is the tension between maximizing and satisficing....
SHRM Blog | Jul 27, 2015 01:35pm
By Joe Yankel, Member of the Technical Staff, CERT Cyber Security Solutions Directorate
This post is the latest installment in a series aimed at helping organizations adopt DevOps.
In our last post, DevOps and Docker, I introduced Docker as a tool to develop and deploy software applications in a controlled, isolated, flexible, and highly portable infrastructure. In this post, I am going to show you how easy it is to get started with Docker. I will dive in and demonstrate how to use Docker containers in a common software development environment by launching a database container (MongoDB) and a web service container (a Python Bottle app), then configuring them to communicate with each other, forming a functional multi-container application. If you haven't learned the basics of Docker yet, you should try out the official tutorial here before continuing.
To get started, you need to have a virtual machine or other host that is compatible with Docker. Follow the instructions below to create the source files necessary for the demo.
For convenience, you can download all source files from our GitHub repository and skip to the demo section. Our source contains a Vagrant configuration file that allows you to run the demo in a known working environment. See our introductory post about Vagrant here.
If you would rather follow along and create all the files manually, continue with the following detailed steps.
Detailed Steps to Create Both Containers
1. Create a directory for our web service application called myapp. In this directory create the following files:
requirements.txt
Dockerfile
webservice.py
2. In requirements.txt, add the following lines to indicate which Python packages will be installed when Docker initializes the container:
bottle
pymongo
3. In Dockerfile, add the following contents:
FROM python:3-onbuild
CMD [ "python", "./webservice.py" ]
The FROM line tells Docker to pull a specific image from the Docker repository. In this case, we get the official Python 3 "onbuild" image, which automatically installs the packages listed in requirements.txt and copies our application code into the image at build time.
The CMD line provides the command that Docker will execute when the container starts. In this case it runs the Python web application, which we define below.
4. In webservice.py, add the following contents:
#!/usr/bin/python
from bottle import route, run, debug, default_app, response
import os
import random
from pymongo import MongoClient

# Configure DB params
db_name = 'slsdb'
# Optional default development database host, used if the environment variable is not set
default_host = 'some-development-mongo-host'
db_host = os.environ.get('MONGO_PORT_27017_TCP_ADDR', default_host)

@route('/')
def index():
    """
    Default landing page. We'll initialize some MongoDB test data here.
    """
    client = MongoClient(db_host)
    db = client[db_name]
    r = lambda: random.randint(0, 255)
    color = ('#%02X%02X%02X' % (r(), r(), r()))
    db.colors.insert({"color": color})
    return """
    <p>Hello. Creating some default data every time the page is visited.</p>
    <a href="http://localhost:8000/hello">See the data!</a>
    """

@route('/hello')
def hello_world():
    """
    Return the contents of the collection we created at index.
    """
    client = MongoClient(db_host)
    db = client[db_name]
    blocks = ''
    colors = [doc['color'] for doc in db.colors.find()]
    for color in colors:
        blocks += '<div style="width:75px; height:75px; border:1px solid;'
        blocks += 'float:left; margin:1px; background-color:' + color
        blocks += '">' + color + '</div>'
    # Add a back link
    blocks += '<div style="clear:both"><a href="http://localhost:8000">Go back.</a></div>'
    return blocks

app = default_app()
debug(True)
run(host='0.0.0.0', port=8000, reloader=True)
This simple Python web service inserts some color data into the database when we browse to the root URL and delivers the data we inserted when visiting the /hello route. The important thing to take away from this example is how to connect to MongoDB using an environment variable created by Docker to derive the IP address of the MongoDB container. Docker automatically creates a number of environment variables for each linked container, in the format:
<name>_PORT_<port>_<protocol>
For more details on environment variables, see the documentation.
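As a rough illustration, linking a container under the alias mongo (as we do later with --link mongo:mongo) produces variables along these lines inside the linking container; the values, especially the address, are examples and will vary from run to run:
MONGO_PORT=tcp://172.17.0.2:27017
MONGO_PORT_27017_TCP=tcp://172.17.0.2:27017
MONGO_PORT_27017_TCP_ADDR=172.17.0.2
MONGO_PORT_27017_TCP_PORT=27017
MONGO_PORT_27017_TCP_PROTO=tcp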
The environment variable we are interested in is the IP address of the MongoDB container. The IP address can change each time the container is started, so our application uses the environment variable named MONGO_PORT_27017_TCP_ADDR to connect to it. In webservice.py, we set the variable db_host to this environment variable's value:
db_host = os.environ.get('MONGO_PORT_27017_TCP_ADDR', default_host)
Now that we have all the files prepared, we’ll start the MongoDB container, build the web service container, and then run it linking both together.
Demo
Begin in the myapp directory created earlier on the virtual machine. The rest of the tutorial assumes that you are using the Vagrant-generated virtual machine provided with the source code. If you did not use the Vagrant-generated machine, you will need to replace any path below matching "/vagrant/myapp" with the full path to your equivalent myapp directory.
cd /vagrant/myapp
1. Run the MongoDB container, using the official MongoDB Docker image. This will take a moment to pull the necessary layers from the repository.
$ docker run --name mongo -d mongo
The docker run command starts a container. In this instance we are starting a container named mongo (--name mongo); you can name a container whatever you want. The -d flag starts the container in daemonized mode, that is, as a background process. Finally, the second mongo is the name of the image to run. If the image is not found locally, Docker will attempt to pull an image named mongo from the Docker repository.
To make sure the MongoDB container is running, you can view the running Docker containers by executing the docker ps command.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c1fc1ef13e1d mongo:latest "/entrypoint.sh mongod 5 seconds ago Up 5 seconds 27017/tcp mongo
2. Build the Python web service container. Include the trailing '.'
$ docker build -t webservice .
This command builds a Docker image named webservice based on the Dockerfile in the current directory.
3. Run the web service container and link it to the running MongoDB container. Expose port 8000 and make it available to the host.
Notice that we also want to mount the local source code, so we can make code changes on the fly. Use the full path of your myapp directory.
$ docker run --name webservice -p 8000:8000 -v /vagrant/myapp:/usr/src/app --link mongo:mongo -d webservice
4. Browse to http://localhost:8000/ to initialize our data.
5. Browse to http://localhost:8000/hello or use the link on the web app home page to see the data pulled from our MongoDB container.
That is all there is to linking containers and using the environment variables that are exposed to connect the containers together in an application.
What if you want to actually get on the MongoDB console and see what is in your database?
The Docker way of doing this is to start another container from the same MongoDB image and connect to your running instance. There is no need to install the MongoDB client or shell tools on your own host or even guest OS. The official MongoDB image we are using already has these tools, so we just need to start a new container and override its entrypoint. The only thing we need to know is the current IP address of the running MongoDB container. To get this address, we inspect the container:
$ docker inspect mongo
Near the bottom of the output we are looking for the IPAddress value in the NetworkSettings configuration.
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"MacAddress": "02:42:ac:11:10:12",
"PortMapping": null,
"Ports": {
"27017/tcp": null
}
}
In this example, the IP address is 172.17.0.2. So, to start a MongoDB shell connecting to the MongoDB instance, run the following:
$ docker run -i -t --name "mongoshell" --entrypoint "mongo" mongo 172.17.0.2
MongoDB shell version: 2.6.6
connecting to: 172.17.0.2/test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
>
You are now in the shell of the running MongoDB instance and can easily view the data created by visiting http://localhost:8000/. The color data is in the 'colors' collection of the database named 'slsdb'.
>use slsdb
switched to db slsdb
>show collections
colors
system.indexes
>db.colors.find()
{ "_id" : ObjectId("5485eb8a3b65a90017ea338e"), "color" : "#7DE6B6" }
{ "_id" : ObjectId("5485ebd03b65a90017ea338f"), "color" : "#1C3160" }
{ "_id" : ObjectId("5485ecfb3b65a9001a39cdce"), "color" : "#C8618C" }
{ "_id" : ObjectId("5485ee5d3b65a90026ff32d1"), "color" : "#973905" }
{ "_id" : ObjectId("5485ee5f3b65a90026ff32d2"), "color" : "#06076A" }
{ "_id" : ObjectId("5485ef643b65a9002fa9fcc6"), "color" : "#D5E272" }
{ "_id" : ObjectId("5485ef823b65a9002fa9fcc7"), "color" : "#9B459E" }
{ "_id" : ObjectId("5485ef863b65a9002fa9fcc8"), "color" : "#3C46EE" }
I hope this practical demonstration helps to get you up and running with Docker quickly. Docker certainly is changing the landscape of DevOps, especially since we can reuse many pre-built containers, such as the MongoDB container used here. This reuse minimizes the need for developers to invest time learning how to properly run or build such containers.
Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
Additional Resources
To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.
To read all the installments in our weekly DevOps series, please click here or click on the links below.
An Introduction to DevOps
A Generalized Model for Automated DevOps
A New Weekly Blog Series to Help Organizations Adopt & Implement DevOps
DevOps Enhances Software Quality
DevOps and Agile
What is DevOps?
Security in Continuous Integration
DevOps Technologies: Vagrant
DevOps and Your Organization: Where to Begin
DevOps and Docker
SEI Blog | Jul 27, 2015 01:34pm
Ryan Mallinson
I’m Ryan Mallinson and I’ve been part of the DWP Digital Academy team for 18 months. I joined the team shortly before we welcomed the first Digital Academy in Leeds.
The aim of the Digital Academy is to grow digital skills within DWP. Once graduated, Academy students go on to work in, and in some cases form, the teams driving digital developments. This is the immediate impact of the Academy. We are growing our own capability and sending people straight into the real world!
We’ve had over 1000 people ‘touched’ by the Academy so far. Around 130 people have graduated from our Foundation course, with the rest attending our 1-day ‘Discover Digital’ event.
We brought around 120 graduates of the Digital Academy together on Friday 13 February to kick-start our community agenda. We want to take what we’re doing to a wider audience and collaborate with other government departments.
We had graduates from the very (noisy) first Academy cohort - people who had gone on to lead digital development as Product Managers and Delivery Managers, right through to some of our newest graduates - Sandra Berry and Sommer Croft who had graduated the day before!
Honest and thought-provoking
When planning the day, I really wanted to get the ‘real’ picture of what life is like post-Academy. I gave each presenter their platform, and they were a little surprised when I told them that it was theirs to do with as they pleased. From this we were presented with some very personal stories about what people had got out of the Digital Academy, and how it fitted alongside their lives, whether that involved a family bereavement or the arrival of a new child in the family. I couldn’t have hoped for more engaging and honest stories.
People took to the stage and felt comfortable enough to not only talk about what worked well, but also the blockers they’d faced. It wasn’t a surprise that this included:
Governance - the frustrations around getting approval and budget to run a Discovery
Delays - the difficulties caused when a multi-disciplinary agile team has to be stood down or paused while the governance wheels turn from discovery to alpha, or alpha to beta.
Using contractors to manage teams - the question of how people’s performance is managed and improved when this is being done by contractors who might not be familiar with our people performance approach.
Dual reporting - in an agile team where stand-ups and show and tells are the way to find out what a team is doing, versus the need to fill in templates and progress reports to keep a wide and remote bunch of stakeholders up to date.
Digital Academy user needs and backlog
We had two interactive sessions where the graduates wrote the user needs and stories to improve the Digital Academy in future. Some of the themes were:
Translating Academy learning to agile projects can be challenging
IT and tools can be a blocker
Graduates need to have some support so they can hit the ground running
These themes came out again when we held a Q&A session with Kevin Cunnington, Sarah Cox and Annette Sweeney - with people asking how we can create a dynamic workforce of people with digital skills who can be deployed to discoveries or shared across different projects as and when priorities require.
What next?
We’re planning other blogs with first-hand accounts from our first cross-government Academy and we are only just dipping our toe into the cross-government digital community which is already starting to catch fire.
We hope to continue these events and will be asking you to tell us what you want them to include. This is ‘Your Community, Your Voice’ and we want to hear it.
DWP Digital Blog | Jul 27, 2015 01:34pm
By C. Aaron Cois, Software Engineering Team Lead, CERT Cyber Security Solutions Directorate
This blog post is the third in a series on DevOps, a software development approach that breaks down barriers between development and operations staff to ensure more effective, efficient software delivery.
When Agile software development models were first envisioned, a core tenet was to iterate more quickly on software changes and determine the correct path via exploration—essentially, striving to "fail fast" and iterate to correctness as a fundamental project goal. The reason for this process was a belief that developers lacked the necessary information to correctly define long-term project requirements at the onset of a project, due to an inadequate understanding of the customer and an inability to anticipate a customer’s evolving needs. Recent research supports this reasoning by continuing to highlight disconnects between planning, design, and implementation in the software development lifecycle. This blog post highlights continuous integration to avoid disconnects and mitigate risk in software development projects.
To achieve an iterative, fail-fast workflow, Agile methodologies encouraged embedding customer stakeholders full time with the development team, thereby providing in-house, real-time expertise on customer needs and requirements. In essence, Agile methodologies have created a constant, real-time feedback loop between customer subject matter experts and software development teams. In a previous post, I presented DevOps as an extension of Agile principles. Consistent with this definition, DevOps takes the real-time feedback loop concept and extends it to other points in the software development lifecycle (SDLC), mitigating risks due to disconnects between developers, quality assurance (QA), and operations staff, as well as disconnects between developers and the current state of the software.
A cornerstone of DevOps is continuous integration (CI), a technique designed and named by Grady Booch that continually merges source code updates from all developers on a team into a shared mainline. This continual merging prevents a developer’s local copy of a software project from drifting too far afield as new code is added by others, avoiding catastrophic merge conflicts. In practice, CI involves a centralized server that continually pulls in all new source code changes as developers commit them and builds the software application from scratch, notifying the team of any failures in the process. If a failure is seen, the development team is expected to refocus and fix the build before making any additional code changes. While this may seem disruptive, in practice it focuses the development team on a singular stability metric: a working automated build of the software.
Recall that a fundamental component of a DevOps approach is that to remove disconnects in understanding and influence, organizations must embed and fully engage one or more appropriate experts within the development team to enforce a domain-centric perspective. To remove the disconnect between development and sustainment, DevOps practitioners include IT operations professionals in the development team from the beginning as full team members. Likewise, to ensure software quality, QA professionals must be team members throughout the project lifecycle. In other words, DevOps takes the principles of Agile and expands their scope, recognizing that ensuring high quality development requires continual engagement and feedback from a variety of technical experts, including QA and operations specialists.
For example, continuous integration (CI) offers a real-time window into the actual state of the software system and associated quality measurements, allowing immediate and constant engagement of all team members, including operations and QA, throughout the project lifecycle. CI is a form of extreme transparency that makes sure that all project stakeholders can monitor, engage, and positively contribute to the evolving software project without disrupting the team with constant status meetings or refocusing efforts.
Due to their powerful capabilities, CI servers have evolved to perform (and therefore, verify) other important quality metrics automatically, such as running test suites and even automatically deploying applications into test environments after a successful integration. As DevOps practice matures, my expectation is that CI systems and tools will continue to evolve as a central management system for the software development process, as well as testing and integration. One of the research areas my team is exploring is ways to enhance software security by adapting effective security testing and enhancement tools to run efficiently within the constraints of CI, a topic I will explore further in the next installment of our ongoing series on DevOps.
Continuous Integration in DevOps
As I stated in the second post in this series, DevOps, in part, describes techniques for automating repetitive tasks within the software development lifecycle (SDLC), such as software builds, testing, and deployments, allowing these tasks to occur more naturally and frequently throughout the SDLC.
I oversee a software engineering team within the SEI's CERT Division that focuses on research and development of solutions to cybersecurity challenges. When developers on my team write code, they test locally and then check the code into a source control repository. We focus on frequent code check-ins to avoid complex merge problems. After code is checked in, our CI system takes control. It monitors the source code repositories for all projects and pulls an updated version of the code when it detects a new commit. If the project is written in a compiled language (we regularly use many different languages and frameworks), the server compiles and builds the new code. The CI server also runs the associated unit test suites for the project. If the prior steps succeed, the server runs pre-configured scripts to deploy the application to a testing environment. If any of these processes fails, the CI server fails the build and sends immediate failure notifications to the entire project team. The team's goal is to keep the build passing at all times, so a developer who breaks the build is expected to get it back on track immediately. In this way, the CI server helps to reinforce the habit of thoroughly testing code before committing it, to avoid breaking the build and disrupting the productivity of other team members.
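Conceptually, the job our CI server runs on every commit boils down to something like the following sketch. The commands and script names are placeholders rather than our actual configuration, and real CI systems express these steps through their own configuration formats:
#!/bin/sh
# Hypothetical per-commit CI job; each command stands in for the project's real
# build, test, and deployment steps.
set -e                   # any failing step fails the build

git checkout "$COMMIT"   # the CI server supplies the commit that triggered the job
make build               # compile/build the code (skipped for purely interpreted projects)
make test                # run the unit test suite
./deploy_to_test.sh      # on success, deploy the application to the testing environment
If any step exits with a nonzero status, the server marks the build as failed and notifies the team, as described above.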
Without this QA process, a developer may check broken code into a central repository. Other developers may make changes that depend on this broken code, or attempt to merge new changes with it. When this happens, the team can lose control of the system’s working state, and suffer a loss in momentum when forced to revert changes from numerous developers to return to a functional state.
CI servers (also known as build servers) automatically compile, build, and test every new version of code committed to the central team repository, ensuring that the entire team is alerted any time the central code repository contains broken code. This severely limits the chance of catastrophic merge issues and of losing work built on a broken codebase. In mature operations, the CI server may also automatically deploy the tested application to a quality assurance (QA) or staging environment, realizing the Agile ideal of a consistently working version of the software.
All the actions described above are performed based on automated configuration and deployment scripts written collaboratively by development and operations engineers. The collaboration is important—it ensures that operations expertise in deployment needs and best practices is represented in the development process, and that all team members understand these automated scripts and can use and enhance them. This collaboration also sets the stage for use of the same scripts to eventually deploy the system into production environments with high confidence, a process known as continuous deployment, which is a topic for a later post.
As shown in the graphic below, the build server checks out new code from source control, compiles/builds it (if necessary), and tests the code (primarily unit tests at this stage, though static code analysis is also possible). Once the code is tested, the build server deploys it to QA. At this point, the build server can also launch scripts to perform integration testing, user interface testing, advanced security testing (more on this soon), and other tests requiring a running version of the software. Consistent with Agile requirements that emphasize a continually working version of the software, our CI server automatically reverts to the last successful version of the software, keeping a working QA system available even if integration tests fail.
With CI, when developers check in bad code, the system will automatically notify the entire team within minutes. This notification of failure can also be applied to failing functional tests, failing security tests, or failing automated deployment processes, creating an immediate feedback loop to developers to reinforce both software functionality and quality standards. This process helps the team to manage the development of complex, multi-faceted systems without losing sight of defects arising in previously completed features or overall quality. The CI process should be designed to reinforce the quality attributes most important to your system or customer:
Is security a primary concern? Configure your build server to run a comprehensive suite of security tests and fail the build if vulnerabilities are found.
Is performance a priority? Configure your build server to run automated performance tests to measure the speed of key operations, and fail the build if they are too slow (even if the operation completed successfully).
Think of continuous integration as your gatekeeper for quality control. Design your failure rules to enforce continual adherence to the quality measures that are most important to your organization and your customers.
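As a rough sketch of such a failure rule, a build server could run a script like the following after the test suite passes; the scanner and report-parsing commands here are hypothetical placeholders for whatever security tooling your team actually uses:
#!/bin/sh
# Hypothetical quality gate: fail the build when the security scan reports any findings.
# 'security-scan' and 'count_findings.py' are placeholder names, not real tools.
security-scan --output report.json
FINDINGS=$(python count_findings.py report.json)   # assumed helper that counts reported vulnerabilities
if [ "$FINDINGS" -gt 0 ]; then
    echo "Failing build: $FINDINGS security findings in report.json"
    exit 1   # a nonzero exit status is what tells the CI server to fail the build
fi
A performance gate works the same way: measure the key operations, compare against a threshold, and exit nonzero when the build is too slow.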
Wrapping Up and Looking Ahead
While CI is by no means a new phenomenon, the DevOps movement underscores its importance as a foundational technique for software process automation and enforcement. There are many popular CI systems, including Jenkins, Bamboo, TeamCity, CruiseControl, Team Foundation Server, and others. This variety means that any team should be able to find a tool that both meets its needs and integrates well with the technology stack(s) it employs.
For more information on this and other DevOps-related topics, every Thursday the SEI publishes a new blog post offering guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content.
Please leave feedback in the comments section below.
Additional Resources
To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.
SEI Blog | Jul 27, 2015 01:33pm
There are many good employee engagement surveys out there (SHRM has one, for example). Most of them ask about pay, benefits, workplace culture and other external conditions that help people become engaged in their work. In my new book, Triggers: Becoming the Person You Want to Be (with Mark Reiter, Crown, 2015), I suggest that we also ask another kind of question—about what employees can do to engage themselves. "Active questions," as I call them, are...
SHRM Blog | Jul 27, 2015 01:32pm
Fiona Speirs - Head of User Research - DWP
I’m Fiona Speirs, the Head of User Research in DWP. My aim is to embed the practice of designing all our digital services around our users. To do this, I’m building a User Research team in DWP. The team generates insight to help the Department make better decisions that take into account the needs and behaviours of our service users, while delivering the desired policy and service outcomes.
The role of the user researcher
Much has been said about designing digital services around user needs. To do this, we need to really understand our users and to build a rich picture of their attitudes and needs - backed by sound analysis, and quantitative and qualitative evidence. I’m looking for user researchers who can plan and design research programmes, generate new user evidence in creative and innovative ways, and weigh up evidence from different (often conflicting) sources. We’re involved in over a dozen live projects. You’ll work with the digital project teams to generate the feedback and insight that will help to build a clear picture of their users, and deliver solutions that they need. In essence, user researchers get people to focus on creating online services that meet real users’ needs and which are simple and intuitive to use. As a user researcher, you’ll be a natural collaborator, working with really talented designers, developers and analysts in agile teams. You’ll be excellent at managing senior stakeholders, engaging the right people in research findings, watching live research sessions, and increasing understanding of user needs.
The user research team
We’ve already got some excellent user researchers in DWP. To meet the challenge of designing better digital services in government, we want to grow our skills and the user research team. People increasingly expect to access services digitally at a time that suits them. To meet this challenge, we are transforming the way we operate to design automated, efficient services in an agile way that puts the users’ needs first. User research is at the heart of this.
User research vacancies
We have vacancies for 5 User Researchers and 2 Senior User Researchers.
DWP Digital Blog | Jul 27, 2015 01:32pm
By Todd Waits, Project Lead, Cyber Security Solutions Directorate
This post is the latest installment in a series aimed at helping organizations adopt DevOps.
In the post What is DevOps?, we define one of the benefits of DevOps as "collaboration between project team roles." Conversations between team members and the platform on which communication occurs can have a profound impact on that collaboration. Poor or unused communication tools lead to miscommunication, redundant efforts, or faulty implementations. On the other hand, communication tools integrated with the development and operational infrastructures can speed up the delivery of business value to the organization. How a team structures the very infrastructure on which they communicate will directly impact their effectiveness as a team. ChatOps is a branch of DevOps focusing on the communications within the DevOps team. The ChatOps space encompasses the communication and collaboration tools within the team: notifications, chat servers, bots, issue tracking systems, etc.
In a recent blog post, Eric Sigler writes that ChatOps, a term that originated at GitHub, is all about conversation-driven development. "By bringing your tools into your conversations and using a chat bot modified to work with key plugins and scripts, teams can automate tasks and collaborate, working better, cheaper and faster," Sigler writes.
Most teams have some level of collaboration on a chat server. The chat server can act as a town square for the broader development teams, facilitating cohesion and providing a space for team members to do everything from blowing off steam with gif parties to discussing potential solutions to real problems. We want all team members on the chat server. In our team, to filter out the noise of a general chat room, we also create dedicated rooms for each project where the project team members can talk about project details that do not involve the broader team.
More than a simple medium, the chat server can be made intelligent, passing notifications from the development infrastructure to the team, and executing commands back to the infrastructure from the team. Our chat server is the hub for notifications and quick interactions with our development infrastructure. Project teams are notified through the chat server (among other methods) of any build status they care to follow: build failures, build success, timeouts, etc.
Chatbots are autonomous programs that operate and live on the chat server. We have two bots, built using the jabber-bot Ruby library, that operate on our chat servers. Other chat bot options include python-jabberbot, Lync chat bots, and GitHub's Hubot. Chat bots can be as simple or complex as the team needs them to be.
Our proof-of-concept "DevBot" started small by allowing us to type in commands to dynamically get a command line tip from the website commandlinefu.com. While initially frivolous, the development of this bot allowed our team to develop rapport and trust with one another. As team members realized the power of what the bot could do, it quickly morphed into a utility that would allow us to log work activities to a database, or query the status of the build server. Our bots are written in Ruby and allow us to quickly add commands and integrate with external and internal resources as allowed by policy.
After the success of our first bot, we created a second bot that allowed direct interaction with our issue tracker from the chat window. We could create cases, get a list of active cases by user, and resolve or open new cases from the chat window. Essentially, this gave us the power to contextually create cases based on the current conversation without leaving the chat window.
For example, if team members are having a conversation and realize they need to adjust a task estimate, they can do so from the chat window. They do not need to load a separate tool; they simply send a chat message to the bot to update the estimate of a particular task.
The code below, for example, would look for the word "estimate," a case number, and the estimate value in a chat message. When the bot sees the appropriate message, it executes a command adding the estimate to the case number supplied with the credentials of the user chatting with the bot.
def estimate(user, msg)
  casenum = msg[0]
  estimate = msg[1]
  fb = fb_session(user)  # open an issue-tracker session with the chatting user's credentials
  fb.command(:edit, :ixBug => casenum, :hrsCurrEst => estimate)  # set the current estimate on the case
end

bot.add_command(
  :syntax => 'estimate <CASENUM> <HOURS>',
  :description => 'Adds entered hours as estimate to specified case.',
  :is_public => true,
  :regex => /^estimate\s+(.+)\s+(.+)$/
) do |from, msg|
  #user = prepuser(from)
  estimate(from, msg)
  puts "#{from} estimated #{msg[1]} hours on case #{msg[0]}"
  "#{from} estimated #{msg[1]} hours on case #{msg[0]}"
end
The chat bot can be an excellent way to on-board new members of a DevOps team. Coding and implementing new features for the bots allows new team members the opportunity to interact with various systems at a deep level very quickly. The team members get a feel for how the infrastructure and team work together, and they do it on a relatively low-risk project. By building functionality onto a chat bot, team members learn the issue tracker, version control, chat server, and build server, to name just a few. Veteran team members immediately see how the new team member adds value to the team.
Finding ways to make communication more effective and actionable will go a long way toward extending the DevOps capabilities of a team.
Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
Additional Resources
To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.
To read all the installments in our DevOps series, please click here or on the individual posts below.
An Introduction to DevOps
A Generalized Model for Automated DevOps
A New Weekly Blog Series to Help Organizations Adopt & Implement DevOps
DevOps Enhances Software Quality
DevOps and Agile
What is DevOps?
Security in Continuous Integration
DevOps Technologies: Vagrant
DevOps and Your Organization: Where to Begin
DevOps and Docker
Development with Docker
Continuous Integration in DevOps
SEI Blog | Jul 27, 2015 01:32pm
Natalie Rhodes - User Researcher at DWP
Ben Holliday recently posted a Digital Academy blog called ‘Understanding the problem’. It’s an introduction to understanding the problem when you’re working with a team on a service in discovery. As well as developing a research plan to get started, he explained the importance of finding ways to record (and keep a record of) everything you’re learning. In this post, I’ll explain how to do this in more detail.
Before you start
Before you start, consider working in a pair. You’ll find it’s much easier if one person is asking questions and someone else is concentrating on capturing feedback.
Capture observations using sticky notes
One of the best tools you can use for capturing observations is the sticky note. They’re great for grouping observations later when you get to your analysis. Use these 5 tips when writing sticky notes - if you don’t want to use sticky notes the same principles apply to keeping research notes.
Write one observation per sticky note
The more information each sticky note holds, the harder it is to understand and organise later. You might miss important information by writing more than one observation or adding too much detail to a single sticky note.
Capture the thing, not your interpretation of the thing
It’s important to capture what’s happening, not what you think it means. Interpretation comes later. Right now, you want an accurate record of what people have said and done.
Make sure you know who said what
Label your observations so you can identify the people taking part in your research. An easy approach is to give each person you speak to a number, then label each observation you write down with the corresponding number.
Don’t use jargon, acronyms, or shorthand
Write observations so other people can understand them. You shouldn’t have to explain these to people if they’re clear and concise.
Make sure other people can read your handwriting - using uppercase for legibility is a good idea.
Personal data
Finally, don’t capture personal information, which could allow someone to be identified. Names, national insurance numbers, addresses, etc., shouldn’t be recorded. You need to think: "If I lost this, could someone identify who this person is?"
Making sense of your observations
Once you have got all your interviews done, the next step is to look for common themes in your observations. A good technique for this is affinity sorting.
As many people from the team as possible should be involved in the analysis stage. Involve everyone who’s been directly involved with your research.
Affinity sorting
You’ll need a big space, preferably a wall to post up your observations. Using affinity sorting, organise each sticky note into related groups. This isn’t an exact science so don’t feel that once you have put a sticky somewhere you have to leave it there - that’s the reason we use sticky notes.
To summarise:
Read your first sticky note and stick it up on the wall
Read the second sticky note. Is it related to the first observation or is it about something different?
If it's different, then stick it somewhere else on the wall.
If it’s the same, group this with your first sticky note. If you’re not sure put it somewhere close by.
Read out each observation, if this helps. Make sure everyone involved in the room is clear about what happened when the research took place.
Keep going until you’ve sorted through all your observations. This can take as long as a few hours, depending on how many observations you’ve captured during your research.
You should be able to see themes emerging from your groupings. At this stage label your groups as insights - the interpretation of what we think each group of data actually means.
Make your findings visible
It’s important to make your research as visible as possible to your team. Get a wall, a board, or a window - anywhere you can display user needs or insights from research.
There’s no hard and fast rule about how to write up your research, but concentrate on communicating key themes or insights to your team. Keep records of what you’ve learnt each time you do research so you can go back to it and review it, and compare it with all the new things you are learning.
DWP Digital Blog | Jul 27, 2015 01:31pm
By William R. Nichols, Senior Member of the Technical Staff, Software Solutions Division
As software continues to grow in size and complexity, software programmers continue to make mistakes during development. These mistakes can result in defects in software products and can cause severe damage when the software goes into production. Through the Personal Software Process (PSP), the Carnegie Mellon University Software Engineering Institute has long advocated incorporating discipline and quantitative measurement into the software engineer's initial development work to detect and eliminate defects before the product is delivered to users. This blog post presents an approach for incorporating formal methods with PSP, in particular, Verified Design by Contract, to reduce the number of defects earlier in the software development lifecycle while preserving or improving productivity.
Formal Methods and PSP
Created by Watts Humphrey, PSP incorporates process discipline and quantitative management into the software engineer’s individual development work. PSP promotes the exercise of careful procedures during all stages of development, with the aim of increasing the individual’s productivity and achieving high-quality final products. PSP emphasizes the measurement of software development work using simple measures, such as source lines of code (SLOC). These measures allow software developers to evaluate the effectiveness of their approach and answer the following questions:
How is the process working?
Is my process effective?
If it is not effective, and I need to make a change, what should I change?
What were the impacts of that change?
In essence, PSP is a scientific approach that software developers can use to evaluate their effectiveness. The Team Software Process (TSP) applies the same measurement principles in a team environment, and has been found by Capers Jones to produce best-in-class productivity and early defect removal.
Research conducted by my collaborator Diego Vallespir, director of the Postgraduate Center for Professional Development, University of the Republic Uruguay (CPAP), found that removing defects in the unit testing phase can still be expensive, costing five to seven times more than if they were removed in earlier phases of PSP. Dr. Vallespir's research also found that 38 percent of the injected defects are still present at unit testing. A team of researchers led by Vallespir, along with doctoral student Silvana Moreno (Universidad de la República), professor Álvaro Tasistro (Universidad ORT Uruguay), and myself, theorized that opportunities existed for improvement in the early detection of defects using TSP. Our team felt that the answer might lie in formal methods. Formal methods use the same methodological strategy as PSP: emphasizing care in development procedures, as opposed to relying on testing and debugging. They can also rigorously establish that the programs produced satisfy their specifications.
Formal methods hold fast to the tenet that programs should be proven to satisfy their specifications. Proof is the mathematical activity of arriving at knowledge deductively, starting with postulated, supposed, or self-evident principles and performing successive inferences, each of which extracts a conclusion out of previously arrived-at premises.
Verified Design by Contract (VDbC) is a technique devised and patented by Bertrand Meyer for designing components of a software system by establishing their conditions of use and behavior requirements in a formal language. With VDbC, software developers metaphorically set up a contract to define certain expectations of their software.
In particular, VDbC has been proposed in the framework of object-oriented design (and specifically in the language Eiffel), and, therefore, the software components to be considered are usually classes. The corresponding specifications are pre- and post-conditions to methods, respectively establishing their terms of use and corresponding outcomes, as well as invariants of the class (i.e., conditions to be verified by every visible state of an instance of the class). In the original VDbC proposal, all specifications were written in Eiffel and are computable (i.e., they are checkable at runtime).
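To make the idea concrete, here is a minimal sketch of such a contract written in Python using plain assertions; the class and its conditions are invented for illustration, whereas Eiffel expresses the same ideas natively with require, ensure, and invariant clauses:
# Minimal, hypothetical Design by Contract sketch: the contract is written as
# runtime-checkable assertions mirroring pre-conditions, post-conditions, and a class invariant.
class Account:
    def __init__(self, balance=0):
        assert balance >= 0                          # precondition: terms of use
        self.balance = balance
        self._check_invariant()

    def withdraw(self, amount):
        assert 0 < amount <= self.balance            # precondition: terms of use
        old_balance = self.balance
        self.balance -= amount
        assert self.balance == old_balance - amount  # postcondition: promised outcome
        self._check_invariant()
        return self.balance

    def _check_invariant(self):
        assert self.balance >= 0                     # class invariant: true in every visible state
Verification in VDbC goes beyond executing these checks at runtime: the goal is to prove that every method satisfies its contract.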
VDbC is compatible with test-driven development, which requires defining test cases prior to developing code. This aspect of VDbC is also compatible with PSP, which has always emphasized developing some test cases early in the software development lifecycle as part of the design process. It is important to note that developing test cases is not the whole of design, but rather one aspect of it.
Our Approach
Designs are most effective if you have some way of verifying the design formally. Different types of design representations, for example, pseudo-code to represent logic or module decompositions to represent structure, support different levels of formality. The leverage of additional formality comes from the rigor with which the design can be verified.
Our approach involves using the framework and instrumentation of PSP to evaluate how the design method, in this case VDbC, affects the results. The PSP script supports consistency and measurement by defining the logical sequence of steps (for example, plan, design, code, review code, unit test) that must be followed when building code. With VDbC, we added specific phases, activities, and outcome criteria to the PSP script (for more information about PSP scripts, please see Table 2 in the following SEI technical report) to show that we can
measure how much effort has gone into the phase
set up the contract requirements
conduct verification
check that the design is complete and that the design is correct
As explained in detail in our technical report on this approach, PSP and VDbC: An Adaptation of the PSP that Incorporates Verified Design by Contract, our combined approach adds new phases to PSP, modifies other phases already present, and introduces new scripts and checklists to the infrastructure. The resulting adaptation, hereafter referred to as PSPVDbC, specifically adds the phases of formal specification, formal specification review, formal specification compile, test case construct, pseudo code, pseudo code review, and proof. In the remainder of this post, we present the phases of our combined approach, indicating the activities to be performed and the modifications introduced in the scripts with respect to the original PSP.
Planning. The activities in this phase of PSPVDbC are the same as in ordinary PSP. For example, Program Requirements ensures a precise understanding of every requirement. Size Estimate involves carrying out a conceptual design (i.e., producing a module [class] structure). Resource Estimate estimates the amount of time needed to develop the program. For this, the PROBE method (this SEI technical report includes a description of the PROBE method) is used, which employs historical records and linear regression to produce the new estimate and to measure and improve estimation accuracy.
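As a rough illustration of the regression step that PROBE relies on, the Python sketch below fits a least-squares line through invented historical (estimated size, actual effort) data and projects the effort for a new program. This is a simplification for illustration only, not the full PROBE procedure, which also involves prediction intervals and rules for choosing the regression parameters.

```python
# Simplified sketch of size-to-effort estimation via linear regression.
# Historical (estimated size in LOC, actual effort in minutes) pairs are invented.
history = [(120, 300), (200, 430), (350, 700), (500, 960), (80, 210)]

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n

# Ordinary least squares for effort = b0 + b1 * size
b1 = sum((x - mean_x) * (y - mean_y) for x, y in history) / \
     sum((x - mean_x) ** 2 for x, _ in history)
b0 = mean_y - b1 * mean_x

new_size = 260  # estimated size of the new program (LOC)
estimated_effort = b0 + b1 * new_size
print(f"Estimated effort: {estimated_effort:.0f} minutes")
```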
Task and Schedule Planning applies to long-term projects, which are subdivided into tasks with the time estimated for each; this planning is unchanged in PSPVDbC. Defect Estimate Base is for estimating the number of defects injected and removed at each phase. Historical records and the estimated size of the program are used to perform this estimation. In PSPVDbC, new records are needed to estimate the defects removed and injected at each new phase. Finally, the planning script in PSPVDbC is the same as in PSP, given that the corresponding activities are unchanged.
Design. This phase defines the data structures of the program as well as its classes and methods, interfaces, components, and interactions among all of them. In PSPVDbC, the elaboration of the pseudo code is postponed until the formal specification is available for each method.
Design Review. This phase is the same as ordinary PSP and uses its development script describing the steps to follow in the review. A sample development script is included in Table 13 of our technical report.
Test Case Construction. We want to investigate the cost-effectiveness of test case construction and unit testing when formal methods are used. Problems with unit test include the cost of test case construction, maintenance of test cases, the number of test cases required, and a failure to achieve comprehensive test coverage. We want to determine whether it is practical to reduce or eliminate categories of tests in the unit test phase when using these formal methods. To answer this, the following must be known (a toy calculation of these measures follows the list):
cost of test case construction
cost of unit test construction
defect density entering into unit test
yield of the unit test phase
types of defects entering and escaping unit test
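As a toy illustration of how such measures might be computed from PSP-style defect logs, the sketch below uses invented counts and simplified formulas; PSP defines and collects these measures much more carefully in its own forms and scripts.

```python
# Toy calculation of unit-test quality measures from invented defect counts.
defects_entering_unit_test = 8      # defects present when unit test begins
defects_found_in_unit_test = 5      # defects removed during unit test
defects_escaping_unit_test = defects_entering_unit_test - defects_found_in_unit_test
program_size_kloc = 1.2             # added and modified size in KLOC

defect_density_entering = defects_entering_unit_test / program_size_kloc
unit_test_yield = 100.0 * defects_found_in_unit_test / defects_entering_unit_test

print(f"Defect density entering unit test: {defect_density_entering:.1f} defects/KLOC")
print(f"Unit test yield: {unit_test_yield:.0f}%")
print(f"Defects escaping unit test: {defects_escaping_unit_test}")
```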
Formal Specification. This phase must be performed after the design review. The reason is that reviews are highly effective at detecting defects injected during design, and such defects need to be discovered as early as possible. In this phase, we begin to use the development environment that supports VDbC. Two activities are carried out in this phase:
Construction consists of preparing the environment and defining within it each class with its method headers.
Specification, in which we write down in the carrier language the pre- and post-conditions of each method as well as the class invariant. Note that, within the present approach, the use of formal methods begins once the design has been completed. It consists of the formal specification of the produced design and the formal proof that the final code is correct with respect to this specification.
Formal Specification Review. Using a formal language for specifying conditions is not a trivial task, and both syntactic and semantic defects can be injected. To avoid the propagation of these errors to further stages, and the resulting increase in the cost of correction, we propose a phase called formal specification review. The script that corresponds to this phase contains the following activities:
In the review activity, the sentences of the specification are inspected using a checklist.
In the correction activity, all defects detected during the review are removed.
In checking, the corrections are reviewed to verify their adequacy.
Formal Specification Compile. Any development tool supporting VDbC will be able to compile the formal specification. Since this allows an early detection of errors, we consider it valuable to explicitly introduce this phase into PSPVDbC. In particular, it is worthwhile to detect all possible errors in the formal specifications before any coding is carried out. A further reason to isolate the compilation of the formal specification is to allow the time spent in this specific activity to be recorded.
Pseudo Code. The pseudo code phase allows software developers to understand and structure the solution to the specified problem just before coding. Describing the intent of the design in a programming-neutral language helps the developer bridge the gap between the more abstract design and the concrete implementation. This documentation later supports peer review confirming that the code actually implements the design. Thus, the pseudo code of each class method defined in the logic template is written down. Our approach advocates producing the pseudo code after the compilation of the specification so that the specification serves as a well-understood starting point for elaborating the design in pseudo code. Writing the pseudo code just before coding allows us to follow a well-defined process in which the output of each stage is taken as input to the next.
Pseudo Code Review. A checklist guides the activity in this phase. The pseudo code review activity is added to the development script, and a pseudo code review script is proposed for use in this activity. An example of the script follows:
Produce pseudo code that meets the design.
Record the design logic specification templates.
Record defects in the defect recording log.
Record time in the time recording log.
Code, Code Review, and Code Compile. Just as in PSP, these phases consist of translating the design into a specific programming language, revising the code, and compiling it. The description of these activities in the PSPVDbC development script is the same as in the PSP development script.

Proof. An addition to PSPVDbC, this phase provides evidence of the correctness of the code with respect to the formal specification (i.e., its formal proof). A computerized verifying tool is used that derives proof obligations and helps to carry out the proofs themselves.
Unit Test. This phase is the same as in PSP. It remains relevant because it detects mismatches with respect to the original, informal requirements of the program. These defects can arise at several points during development, particularly as conceptual or semantic errors in the formal specifications. The test cases to be executed must therefore be designed right after the requirements are established (i.e., during the test case construct phase), as indicated above.
Post Mortem. This phase is the same as in ordinary PSP, and its description in the PSPVDbC development script is unchanged. However, several modifications must be made to the infrastructure supporting the new process. For example, all new phases must be included in the support tool to keep track of the time spent in each phase, as well as to record defects injected, detected, and removed at each phase. Our intention in this research was to present the changes in the process needed to incorporate VDbC. The adaptation of the supporting tools, scripts, and training courses is a matter for future work.
Conclusions and Future Work
By definition, in Design by Contract (and therefore also in PSPVDbC) the specification language is seamlessly integrated with the programming language, either because they coincide or because the specification language is a smooth extension of the programming language. As a consequence, the conditions making up the various specifications are Boolean expressions that are simple to learn and understand. We believe that this makes the approach easier to learn and use than the ones that have previously been explored.
Nonetheless, the main difficulty associated with the method resides in developing competence in carrying out the formal proofs of the written code. This is, of course, a challenge common to any approach based on formal methods. Experience shows, however, that the available tools are generally of great help in this matter. For example, the Architecture Analysis and Design Language (AADL), a modeling notation used to model embedded systems, is supported by tools such as OSATE and TASTE. There are reports of cases in which tools have generated the proof obligations and discharged up to 90 percent of the proofs automatically.
We conclude that it is possible, in principle, to define a new process that integrates the advantages of both PSP and formal methods, particularly VDbC.
In our future work, we will evaluate the PSPVDbC in actual practice by carrying out measurements in empirical studies. The fundamental aspect to be measured in our evaluation is the quality of the product, expressed in the number of defects injected and removed at the various stages of development. We are also interested in measures of the total cost of the development.
We welcome your feedback on our work. Please leave comments below.
Additional Resources
To read the SEI technical report on which this research is based, PSPVDbC: An Adaptation of the PSP that Incorporates Verified Design by Contract, please visit http://www.sei.cmu.edu/reports/13tr005.pdf.
To read the book, Software Engineering Best Practices: Lessons from Successful Projects in the Top Companies by Capers Jones, please visit this url.
SEI Blog | Jul 27, 2015 01:31pm
On May 20, @shrmnextchat chatted with Blake McCammon (@rblake) about Millennial Influence. In case you missed this awesome chat, you can see all the great tweets here: [View the story "#Nextchat RECAP: Millennial Influence" on Storify] ...
SHRM Blog | Jul 27, 2015 01:30pm
Hi, I’m Kate Bruckshaw and I’m the Product Owner of a digital project to improve the way that people repay money to DWP. I graduated from the Digital Academy in July 2014.
We're transforming the way we operate to create automated, efficient services designed around understanding service user needs. As Andrew Besford recently wrote, DWP is a huge multi-channel business, where customers (our users) depend on us for information and support. A big part of the work we're doing to reduce fraud and error involves collecting overpaid benefits. In 2013/14, we recovered nearly £1 billion, but there's more to do to increase this and to make it easy for people to pay back money.
Managing the recovery of overpayments can be complex. We’re dealing in debt owed to government and with people in diverse situations, sometimes with very different needs. We want to make it easier for users to deal with us so they don’t need to phone or post stuff and can just go online to complete a repayment in a way that really is simple, clear and fast.
We also want a service that works for everyone, but we know how important it is to start with something that's viable. Eric Ries promoted this as the Minimum Viable Product (MVP), and it translates well as a question: 'what's the simplest thing that could possibly (probably) work?' This isn't about spending months setting up a big IT project; instead it's about making things happen by doing things differently. Roo Reynolds' blog post is still one of the best reads on how we now create new services.
Getting to alpha
For the last few weeks, we’ve been working out what might be possible and what we can do quickly, knowing that we can continue to improve it (it’s an agile project and we’re iterating). There’s no better way to do this than by building something and sharing it with users and that’s what we’ve done, starting with paper prototypes and progressing to a ‘click through’ in PowerPoint before we code.
This kind of approach doesn’t aim for perfect design but we’ve found it’s hugely powerful: being able to go through a user journey, show someone an interface and listen to their experience has been the best way to understand what people really think of your build. Hearing someone tell you it’s ‘sweet’ is even better.
We’re aiming to complete our alpha very soon and to begin testing a fully working prototype as a beta. And that’s where you get a real sense of how transformation is starting to change DWP and of the power of collaboration.
Working in the Transformation Hub
Our project is running as part of a community, alongside other teams doing other digital projects and a rapidly increasing number of seriously talented people now based in our Leeds Transformation Hub. Things happen here. We can grab User Researchers for a pop-up session, get @BenHolliday to test out our prototype or take ten minutes with @mortimer_leigh to learn from the experience of launching our Carers Allowance Digital Service.
What’s more, we’ve been working across government to look at re-using existing components (H/T to Ollie McGuire and his team in HMRC for sharing their source code with us), using new tools (30 minutes on a Google hangout instead of hours travelling by train) and finding out how doing things differently translates into testing new approaches with our delivery partners in commercial and technology teams.
The service we’re building will be simpler, clearer and faster for people to use and we’ll have made it in a way that’s simple, clear and fast for the team.
DWP Digital Blog | Jul 27, 2015 01:30pm
By C. Aaron Cois, Software Engineering Team Lead, CERT Cyber Security Solutions Directorate
This post is the latest installment in a series aimed at helping organizations adopt DevOps.
Regular readers of this blog will recognize a recurring theme in this series: DevOps is fundamentally about reinforcing desired quality attributes through carefully constructed organizational process, communication, and workflow. When teaching software engineering to graduate students in Carnegie Mellon University’s Heinz College, I often spend time discussing well known tech companies and their techniques for managing software engineering and sustainment. These discussions serve as valuable real-world examples for software engineering approaches and associated outcomes, and can serve as excellent case studies for DevOps practitioners. This posting will discuss one of my favorite real-world DevOps case studies: Amazon.
Amazon is one of the most prolific tech companies today. Amazon transformed itself in 2006 from an online retailer to a tech giant and pioneer in the cloud space with the release of Amazon Web Services (AWS), a widely used on-demand Infrastructure as a Service (IaaS) offering. Amazon accepted a lot of risk with AWS. By developing one of the first massive public cloud services, they accepted that many of the challenges would be unknown, and many of the solutions unproven. To learn from Amazon’s success we need to ask the right questions. What steps did Amazon take to minimize this inherently risky venture? How did Amazon engineers define their process to ensure quality?
Luckily, some insight into these questions was made available when Google engineer Steve Yegge (a former Amazon engineer) accidentally made public an internal memo outlining his impression of Google’s failings (and Amazon’s successes) at platform engineering. This memo (which Yegge has specifically allowed to remain online) outlines a specific decision that illustrates CEO Jeff Bezos’s understanding of the underlying tenets of what we now call DevOps, as well as his dedication to what I will claim are the primary quality attributes of the AWS platform: interoperability, availability, reliability, and security. According to Yegge, Jeff Bezos issued a mandate during the early development of the AWS platform, that stated, in Yegge's words:
All teams will henceforth expose their data and functionality through service interfaces.
Teams must communicate with each other through these interfaces.
There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn’t matter. Bezos doesn’t care.
All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
Anyone who doesn’t do this will be fired.
Aside from the harsh presentation, take note of what is being done here. Engineering processes are being changed; that is, engineers at Amazon now must develop web service APIs to share all data internally across the entire organization. This change is specifically designed to incentivize engineers to build for the desired level of quality. Teams will be required to build usable APIs, or they will receive complaints from other teams needing to access their data. Availability and reliability will be enforced in the same fashion. As more completely unrelated teams need to share data, APIs will be secured as a means of protecting data, reducing resource usage, auditing, and restricting access from untrusted internal clients. Keep in mind that this mandate was to all teams, not just development teams. Marketing wants some data you have collected on user statistics from the web site? Then marketing has to find a developer and use your API. You can quickly see how this created a wide array of users, use cases, user types, and scenarios of use for every team exposing any data within Amazon.
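To make the mandate concrete, the sketch below shows, in standard-library Python, the general shape of a team exposing its data through a service interface instead of allowing direct reads of its data store. The endpoint, port, and statistics are invented for illustration and bear no relation to Amazon's actual internal services.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Invented internal data that another team would otherwise read directly.
USER_STATS = {"signups_today": 412, "active_sessions": 1873}

class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The service interface is the only sanctioned way to reach this data.
        if self.path == "/v1/user-stats":
            body = json.dumps(USER_STATS).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Any consumer (another engineering team, marketing, ...) calls this
    # interface over the network rather than touching the data store.
    HTTPServer(("0.0.0.0", 8080), StatsHandler).serve_forever()
```

Any consumer, whether another engineering team or marketing, would then fetch /v1/user-stats over the network rather than querying the owning team's database directly.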
DevOps teaches us to create a process that enforces our desired quality attributes, such as requiring automated deployment of our software to succeed before the continuous integration build can be considered successful. In effect, this scenario from Amazon is an authoritarian version of DevOps thinking. By requiring every team within Amazon to eat (and serve!) its own dogfood, Bezos's engineering operation ensured that, through constant and rigorous use, the teams' APIs would become mature, robust, and hardened.
These API improvements happened organically at Amazon, without the need to issue micromanaging commands such as "All APIs within Amazon must introduce rate limit X and scale to Y concurrent requests," because teams were incentivized to continually improve their APIs to make their own working lives easier. When AWS was released a few years later, many of these same APIs comprised the public interface of the AWS platform, which was remarkably comprehensive and stable at release. This level of quality at release directly served business goals by contributing to the early adoption rates and steady increase in popularity of AWS, a platform that provided users with a comprehensive suite of powerful capabilities and immediate comfort and confidence in a stable, mature service.
Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
Additional Resources
To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.
To read all the installments in our DevOps series, please click here or on the individual posts below.
An Introduction to DevOps
A Generalized Model for Automated DevOps
A New Weekly Blog Series to Help Organizations Adopt & Implement DevOps
DevOps Enhances Software Quality
DevOps and Agile
What is DevOps?
Security in Continuous Integration
DevOps Technologies: Vagrant
DevOps and Your Organization: Where to Begin
DevOps and Docker
Continuous Integration in DevOps
ChatOps in the DevOps Team
SEI Blog | Jul 27, 2015 01:29pm
Naomi Stanford - Organisational Design
The other week, Nic Harrison talked about "fixin' to get ready" with the momentous task of transforming DWP. So what does getting ready mean? For me, it means two things:
Stopping all the horseholding we do
Learning how to leapfrog.
Horseholding
Horseholding is hanging onto things that are standing in the way of transformation. It’s the sort of stuff that new joiners notice before they start to conform, and the sort of stuff that long servers know is meaningless and non-value add but have given up trying to battle with.
The word comes from a story about soldiers in the Second World War who stood to attention before a gun was fired, thus holding up the actual firing process. When asked why they did this, it took a long-retired Colonel to remember that, in the days when guns were drawn by horses, the soldiers had to hold the horses as the gun was fired to stop them bolting when they heard the noise. Seventy years later, the standing to attention was still an unquestioned representation of this.
I've noticed a few DWP horseholdings.
For example, I got a great email the other day. The first paragraph read ‘DWP has been asked to seek expressions of interest for around 20 facilitators …. Grade is not critical.’ The next paragraph read ‘If you are interested in this excellent development opportunity please send your details including your name, grade, job role’. Hmmm, grade isn’t critical but please tell us what yours is...? Is ‘grade’ one of the things we’re horseholding? Others are layers of governance, hard-copy documents, bureaucratic processes (dare I say the performance management system?), the language of command and control - ‘commissions’, ‘what’s the exam question?’, ‘Who’s marking the homework?’ and acting in functional areas not across the Department.
Leapfrogging
Leapfrogging is what we have to do now. Think of telecoms in many African countries. They’ve gone straight into mobile phones, mobile banking, and other mobile technologies never having had fixed landlines. We have to look similarly at where we can effectively leapfrog. Right now, for example, as we design DWP for 2020 we are moving from doing small scale redesigns to meet specific programme needs, towards digitalising services, new ways of working including Smarter Working, and creating new(ish) delivery models. But this is not enough. To get to transformation we have to leapfrog that middle ground and go for fundamentally restructuring in a way that radically changes our Departmental shape, size, operating model and partner relationships.
One lovely, albeit small, experiment that we could build on is running at Loxley House, Nottingham. I had the chance to visit and see a leapfrog in action - an integrated team made up of local authority and DWP employees, managed by a DWP team leader, working in a local authority building (the building is open 24/7), delivering a professional service that secures jobs for local unemployed people.
The many practical challenges they faced have been overcome by having a clear, shared goal and a collaborative/non-hierarchical working style.
Being even bolder we could imagine no DWP but a tiny Civil Service that is just a Secretariat, or a Civil Service which is only a policy maker or one that is entirely ‘e-enabled’. (Look at Estonia for a current leader in this field).
Leapfrogging takes nerve and energy and it can be scary. It is also exciting, challenging and (I think) absolutely essential. To really transform we have to give up horse-holding and start to leapfrog.
Think of your horseholding examples and let me know what they are.
Now consider where the leapfrog opportunities are and let me know on those too.
Email your thoughts to Naomi
DWP Digital Blog | Jul 27, 2015 01:29pm
By Scott McMillan, Senior Member of the Technical Staff, SEI Emerging Technology Center
This blog post was co-authored by Eric Werner.
Graph algorithms are in wide use in Department of Defense (DoD) software applications, including intelligence analysis, autonomous systems, cyber intelligence and security, and logistics optimizations. In late 2013, several luminaries from the graph analytics community released a position paper calling for an open effort, now referred to as GraphBLAS, to define a standard for graph algorithms in terms of linear algebraic operations. BLAS stands for Basic Linear Algebra Subprograms and is a common library specification used in scientific computation. The authors of the position paper propose extending the National Institute of Standards and Technology’s Sparse Basic Linear Algebra Subprograms (spBLAS) library to perform graph computations. The position paper served as the latest catalyst for the ongoing research by the SEI’s Emerging Technology Center in the field of graph algorithms and heterogeneous high-performance computing (HHPC). This blog post, the second in our series, describes our efforts to create a software library of graph algorithms for heterogeneous architectures that will be released via open source.
The Opposite of an Embarrassingly Parallel Problem
In computer science, the term embarrassingly parallel problem describes a situation where the same operation or set of operations can be executed on different data simultaneously, thereby allowing the distribution of data across many computing elements without the need for communication (and/or synchronization) between the elements. These problems are relatively easy to implement on high-performance systems and can achieve excellent computing performance. High-performance computing (HPC) is now central to the federal government and many industry projects, as evidenced by the shift from single-core and multi-core (homogeneous) central processing units (CPUs) to many-core and heterogeneous systems, including graphics processing units (GPUs) that are adept at solving embarrassingly parallel problems.
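For example, squaring a large list of numbers is embarrassingly parallel: each element can be processed independently, so the work can be split across workers with no communication between them. The toy Python sketch below (the workload is invented for illustration) shows the pattern.

```python
from multiprocessing import Pool

def square(x):
    # Each call touches only its own input: no shared state, no communication.
    return x * x

if __name__ == "__main__":
    # The data are simply partitioned across worker processes.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(1_000_000))
    print(results[:5], "...", results[-1])
```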
Unfortunately, many important problems, including graph algorithms, are not embarrassingly parallel. Fundamentally, graphs are data structures with neighboring nodes connected by edges. The computation to be performed on graphs often involves finding and ranking important nodes or edges, finding anomalous connection patterns, identifying tightly knit communities of nodes, and so on. The irregular structure of graphs makes the communication-to-computation ratio for these algorithms high (the opposite of the ratio found in embarrassingly parallel problems), which makes it extremely hard to develop implementations that achieve good performance on HPC systems. We are targeting GPUs for our research not only because of their prevalence in current HPC installations (e.g., for simulating three-dimensional physics), but also because of their potential for providing an energy-efficient approach to the computations. We are investigating different approaches, including the linear algebra approach offered by the GraphBLAS effort, to enable the efficient use of GPUs and pave the way for easier development of high-performance graph algorithms.
Implementation
As detailed in our technical note, Patterns and Practices for Future Architectures, the first milestone in our research was to implement the most basic graph algorithm on GPUs: the breadth-first search (BFS), which also serves as the first algorithm included in the Graph 500, an international benchmark specifically tailored to graph algorithms that measures the rate at which computer systems traverse a graph. The benchmark for this algorithm is divided into two kernels:
graph construction (Kernel 1)
breadth-first search traversal (Kernel 2)
The Graph 500 also provides the specification and a reference implementation for generating graphs with the desired scale-free properties (using a Kronecker generator); a parallel pseudorandom number generator (PRNG); guidelines for how the kernels are invoked, timed, and validated; and procedures for collecting and reporting performance results from Kernel 2, which is used to rank the systems on the Graph 500 list.
An early accomplishment of our work was to decouple those kernels from the benchmark's reference implementation. The resulting code is not part of either kernel, is invariant with respect to the specific BFS algorithms implemented, and forms a software framework within which we develop and evaluate our BFS algorithms. This framework, written in C++, allows us to directly compare different implementations, knowing that the graph properties and measurements are consistent (for a more detailed explanation of our implementation, please see our technical note).
Next, we concentrated our efforts on evaluating a number of different data structures for representing graphs. Due to the low computation-to-communication ratio, a graph's representation in memory can significantly impact the corresponding algorithm's performance. The following data structures and computer architectures were evaluated using systems from the ETC cluster containing up to 128 CPU cores and dual NVIDIA GPUs:
Single CPU, List. The baseline single CPU implementation is a single-threaded, sequential traversal based on Version 2.1.4 of the Graph 500 reference implementation. This baseline is the simplest implementation of BFS and performs poorly because of its unpredictable memory accesses.
Single CPU, Compressed Sparse Row. Much research has already been performed and published to address the memory bottleneck in graph algorithms. One popular alternative to the list structure on CPU architectures is called the compressed sparse row (CSR), which uses memory more efficiently to represent the graph adjacencies by allowing more sequential accesses during the traversals. Using this data structure resulted in improved performance and allowed larger graphs to be manipulated.
Single GPU (CSR). The baseline GPU implementation is an in-memory CSR-based approach described in papers by Harish and Narayanan. The wavefront-based approach in this algorithm is very similar to the behavior in single CPU implementations, except that the parallelism requires synchronization between each ply of the traversal to ensure that the breadth-first requirement is upheld.
Multi-CPU, Combinatorial BLAS (CombBLAS). This approach is the precursor to the GraphBLAS effort mentioned in the introduction and is detailed in a research paper by Aydin Buluc and John R. Gilbert, which describes it as an "extensive distributed-memory parallel graph library offering a small but powerful set of linear algebra primitives specifically targeting graph analytics." CombBLAS is a multi-CPU, parallel approach that we compare against the parallel GPU approach.
Our results for these implementations are shown in Figure 1 below. The complexity of programming these architectures is a primary concern, so the performance of the BFS traversal (Kernel 2) is plotted against the relative amounts of code required both to build the efficient data structures (Kernel 1) and implement the traversal (Kernel 2).
Figure 1. Performance BFS traversal (Kernel 2) relative to source lines of code (SLOC), a proxy for implementation complexity.
The results from the two single CPU implementations confirmed that the data structures used to represent the graph can significantly affect performance. Moreover, the CSR representation leads to an order of magnitude improvement in performance as measured in traversed edges per second (TEPS). Using the CSR data structure in the development of a single GPU traversal achieves another order of magnitude performance improvement.
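To give a feel for the CSR layout, the simplified, single-threaded Python sketch below runs BFS over a tiny invented graph; it is not the benchmark code used in our experiments. The adjacencies are packed into two flat arrays so that each vertex's neighbors sit contiguously in memory, which is what enables the more sequential accesses described above.

```python
from collections import deque

# Compressed sparse row (CSR) form of a small, invented directed graph:
# row_ptr[v] .. row_ptr[v+1] indexes the slice of col_idx holding v's neighbors.
row_ptr = [0, 2, 4, 5, 6, 6]
col_idx = [1, 2, 2, 3, 4, 4]

def bfs_csr(source, num_vertices):
    """Breadth-first search over a CSR graph; returns hop distance per vertex."""
    dist = [-1] * num_vertices
    dist[source] = 0
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        # Neighbors of v form a contiguous slice, giving sequential memory access.
        for w in col_idx[row_ptr[v]:row_ptr[v + 1]]:
            if dist[w] == -1:
                dist[w] = dist[v] + 1
                frontier.append(w)
    return dist

print(bfs_csr(0, 5))   # prints [0, 1, 1, 2, 2] for this example graph
```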
The Challenge
Note that, as the performance of these implementations increases, so does the amount of code required to achieve that performance (about 30 percent more for CSR, 60 percent more for CSR on GPU). Advanced data structures for graph analytics are an active area of research in their own right. Add to that the increasing complexity of emerging architectures like GPUs, and the challenges are multiplied. Achieving high performance in graph algorithms on these architectures requires developers to be experts in both. The focus of our work is therefore to find the separation of concerns between the algorithms and the underlying architectures, as shown by the dashed line in Figure 2 below.
Figure 2. The architecture of the graph algorithms library that captures the separation of concerns between graph algorithms and the complexities of the underlying hardware architecture. It is similar to the architectures of computation-heavy scientific applications that depend on highly tuned implementations of the BLAS specification.
The graph algorithm library we are developing for GPUs is aimed at achieving this goal. If we are successful, our library will hide the underlying architecture complexities in a set of highly tuned graph primitives (data structures and basic operations), allow for easier development of graph analytics code, and maximize the power of GPUs. Part of the benefit of our approach is that the graph analytics community will be able to take advantage of it once it is complete.
Enter BLAS
There is a robust effort already underway in the academic research community focused on data structures and patterns. One approach to address this challenge has been suggested by the graph analytics community. Called GraphBLAS, this effort proposes to build on various existing technologies used commonly in the high-performance scientific computing community. GraphBLAS proposes to build upon the ideas behind sparse BLAS (spBLAS) to represent graphs and a parallel framework like Message Passing Interface (MPI) to scale out to multiple CPUs. A proof-of-concept implementation of this approach, called CombBLAS, was recently released by Aydin Buluc and John R. Gilbert. We also implemented this approach on our 128-CPU system and showed (in Figure 1) that we could scale to multiple CPUs and achieve more than 30X performance improvements over the "Single CPU, CSR" approach.
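The linear-algebra formulation behind these efforts can be sketched without any special library: each BFS level is one multiplication of the transposed adjacency matrix by the current frontier vector over a Boolean (OR, AND) semiring, masked by the vertices not yet reached. The Python sketch below uses a dense matrix and an invented five-vertex graph purely for clarity; GraphBLAS and CombBLAS operate on distributed sparse matrices.

```python
# BFS expressed as repeated Boolean matrix-vector products (dense for clarity).
# A[i][j] == 1 means there is an edge from vertex i to vertex j.
A = [
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
]

def bfs_algebraic(source):
    n = len(A)
    dist = [-1] * n
    dist[source] = 0
    frontier = [1 if v == source else 0 for v in range(n)]
    level = 0
    while any(frontier):
        level += 1
        # next_frontier = A^T * frontier over the Boolean (OR, AND) semiring,
        # masked by the set of vertices that have not yet been reached.
        next_frontier = [0] * n
        for j in range(n):
            if dist[j] == -1 and any(A[i][j] and frontier[i] for i in range(n)):
                next_frontier[j] = 1
                dist[j] = level
        frontier = next_frontier
    return dist

print(bfs_algebraic(0))   # prints [0, 1, 1, 2, 2] for this example graph
```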
This CombBLAS implementation also achieved a 2X performance improvement over our GPU implementation. Just as importantly, however, we achieved this improvement with 15 percent less code by leveraging existing libraries. Similar technologies exist for GPUs, and our goal is to develop a library that hides the underlying complexity of these approaches and implements a number of key graph algorithms. We presented our work to the community at a GraphBLAS birds of a feather session at the 2014 IEEE High Performance Extreme Computing Conference (HPEC).

Collaborations and Future Work
Our research bridges the gap between the academic focus on fundamental graph algorithms and our focus on architecture and hardware issues. In this first phase of our work, we are collaborating with researchers at Indiana University's Center for Research in Extreme Scale Technologies (CREST), which developed the Parallel Boost Graph Library (PBGL). In particular, we are working with Dr. Andrew Lumsdaine, who serves on the Graph 500 Executive Committee and is considered a world leader in graph analytics. Researchers in this lab worked with us to implement and benchmark data structures, communication mechanisms, and algorithms on GPU hardware.
Dr. Lumsdaine's team has experience in high-level languages for expressing computations on GPUs; they created one such language, Harlan, which is available as open source. Researchers at CREST have used Harlan as a springboard to implement graph algorithms on GPUs and to further explore programmability issues with these architectures. This work provided insight into graph algorithms, as well as additional insight into Harlan and where the language can be extended.
We are also collaborating with Dr. Franz Franchetti, an associate professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University. Dr. Franchetti is also involved in the GraphBLAS community and is performing research in the programmability of graph algorithms on multi-CPU systems. Through our collaboration, we hope that lessons from multi-core CPU programming will translate to GPU programming (a GPU can be thought of as many small CPUs on a single chip).
This year, we will also be collaborating with the CERT Division's Network Situational Awareness Group (NetSA). Working with their network data, we will compare the performance of our approaches on commodity GPU hardware with results they have achieved using a supercomputer specially designed to perform graph computations at the Pittsburgh Supercomputing Center.
Many supercomputers have multiple compute nodes (CPUs) with attached accelerators, such as GPUs. For many applications, these accelerators are vastly under-utilized, often because of the complexity of the code needed to run efficiently on them. Our long-term goal is therefore to release a library that allows our customers and the broader HPC community to use these accelerators more easily and efficiently in the growing field of graph analytics.
We welcome your feedback on our research in the comments section below.
Additional Resources
To read the SEI technical note, Patterns and Practices for Future Architectures, please visit http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=300618.
SEI Blog | Jul 27, 2015 01:29pm
Manufacturers are facing big problems when it comes to recruiting enough top-tier factory workers, but there’s a fix few employers think about -- #workflex. Those employers who implement more flexibility on their factory floors are finding workflex is helping them attract and retain top talent. This topic will be front and center during an upcoming session at SHRM’s annual conference next month in Las Vegas entitled Attracting and Retaining Talent in Manufacturing. Why should you attend? With the aging of the manufacturing workforce, employers are looking to the next generation of employees to...
SHRM Blog | Jul 27, 2015 01:29pm
By Aaron Volkmann, Senior Research Engineer, CERT Cyber Security Solutions Directorate
This post is the latest installment in a series aimed at helping organizations adopt DevOps.
When building and delivering software, DevOps practices, such as automated testing, continuous integration, and continuous delivery, allow organizations to move more quickly by speeding the delivery of quality software features that increase business value. Infrastructure automation tools, such as Chef, Puppet, and Ansible, allow the application of these practices to compute nodes through server provisioning using software scripts. These scripts are first-class software artifacts that benefit from source code version control, automated testing, continuous integration, and continuous delivery.
When using software to define networking, the same DevOps practices that help provision and configure compute nodes can be extended to cover provisioning and configuring the network. As Brent Salisbury points out in his blog post titled The Network Iceberg, compute nodes in today’s data centers have evolved with the help of operating system (OS) virtualization, as bare metal servers were condensed into many virtual machines running on a single physical host. Virtual network endpoints now outnumber physical network ports.
The next phase of this evolution is virtualizing the application with the help of containers. A single OS instance running a container platform such as Docker can host many application containers. Each container is a separate endpoint on the software-defined network (SDN), increasing the network density. In the quest for independently testable and deployable program units, applications will be architected as collections of microservices. Application function calls that previously occurred within the same OS process will instead be made between separate services running in separate containers, requiring network connectivity to support these interactions.
More than a decade ago at a medium-sized enterprise I consulted for, the network admins were using Excel spreadsheets to keep track of their network configuration. Today many organizations are still doing the same thing. With the ongoing explosion of network density and complexity within the virtual world, we can no longer rely on Excel spreadsheets or manual testing to manage network changes.
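As a small, hypothetical illustration of that shift, the desired network state can be expressed as version-controlled data and validated by an automated test in the same pipeline that builds the application. The hosts, ports, and check below are invented, and real SDN and container-networking tooling (such as the projects named in the next paragraph) provides far richer models than this sketch.

```python
import socket

# Desired network state expressed as data, kept under version control
# alongside application code; the entries here are invented examples.
DESIRED_ENDPOINTS = [
    {"name": "orders-service", "host": "10.0.1.21", "port": 8443},
    {"name": "billing-service", "host": "10.0.1.22", "port": 8443},
]

def endpoint_reachable(host, port, timeout=2.0):
    """Simple TCP reachability check usable from a CI pipeline."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_declared_endpoints():
    # Run in continuous integration: fail the build if the declared
    # network state and the actual network disagree.
    failures = [e["name"] for e in DESIRED_ENDPOINTS
                if not endpoint_reachable(e["host"], e["port"])]
    assert not failures, f"unreachable endpoints: {failures}"

if __name__ == "__main__":
    check_declared_endpoints()
    print("network state matches the declared configuration")
```

Run against the invented addresses above, the check will simply report them unreachable; the point is that the declared network state lives next to the code and is exercised by the same automated pipeline.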
It is important to point out that there is not yet any single canonical technology to configure both the physical and virtual network. SocketPlane, Flannel, and Pipework are early pioneers in managing container virtual networks. SDNs will enable the network space to gain the efficiencies that the compute space gained through source control, automated testing, continuous integration, and continuous delivery.
Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
Additional Resources
To listen to the podcast, DevOps—Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit http://url.sei.cmu.edu/js.
To read all the installments in our DevOps series, please click here or on the individual posts below.
An Introduction to DevOps
A Generalized Model for Automated DevOps
A New Weekly Blog Series to Help Organizations Adopt & Implement DevOps
DevOps Enhances Software Quality
DevOps and Agile
What is DevOps?
Security in Continuous Integration
DevOps Technologies: Vagrant
DevOps and Your Organization: Where to Begin
DevOps and Docker
Continuous Integration in DevOps
ChatOps in the DevOps Team
DevOps Case Study: Amazon AWS
SEI Blog | Jul 27, 2015 01:29pm
Kevin Cunnington, Director General, Business Transformation
Happy 1st Birthday, Digital Academy!
It’s been a year since we opened the doors of the first Digital Academy in Fulham Jobcentre. Back then, we ran an 8-week course, turning out the first cohort of 12 graduates who then went on to work in key roles in digital projects.
I created the Digital Academy to grow our own capability within DWP. The Digital Academy provides learning and experience that enables graduates to work in agile digital development projects, building services to meet our users’ needs.
It isn’t all about digital though - the learning includes an overview of user research and user-centred design; the agile lifecycle; the role of the Product Manager and Delivery Manager; how to build prototype products and master agile rituals such as show and tells and stand-ups.
Students have a 1-week placement in another government department to experience agile development environments elsewhere - we’ve had students at GDS, HMRC, MOJ and DVLA.
We opened a Digital Academy in Leeds in September 2014, which increased the number of graduates. So far, 140 people have graduated from the Digital Academy. This includes people from other departments who attended our first cross-government Digital Academy in January this year.
Our 1-day ‘Discover Digital’ sessions have allowed people to get a quick overview - over 1000 people have benefited from this.
Our 100th graduate, Suzanne Butler, has talked about what she learned at the Academy and how DWP is transforming by starting with the user.
Kevin Cunnington, Annette Sweeney, Lara Stevenson
I was delighted to be at a Digital Academy community day a week ago, when we brought the graduates together to prioritise their user needs and generate a backlog of ideas for future academies.
It’s clear to me that Digital Academy graduates don’t just get the benefit of extra learning and experience - they’re inspired by working in a way that puts the user at the heart of designing services. Kate Bruckshaw’s blog about the benefit repayments service is a great example of where we’re designing around user needs.
Graduates are a network of like-minded people who can support each other and share knowledge and learning across digital projects. They are innovative, collaborative, curious and like to learn and share - a truly transformative bunch of people.
I’m really looking forward to seeing it grow and really transform how government delivers services that meet users’ needs.
Happy Birthday, Digital Academy - 1 year on and going strong.
DWP Digital Blog | Jul 27, 2015 01:29pm
It's important for HR professionals to know and understand the ramifications of workplace violence, not only on the human level but also concerning the employer's workers' compensation and liability coverage for such acts. Other legal issues loom large, too. As an example, let's say an employee, Jane, has a boyfriend, John, who the employer discovers has violent tendencies. Jane and John break up, and domestic drama ensues via a volley of phone calls during work hours. Jane's co-worker, Karen, overhears a phone call whereby John threatens to come to...
SHRM Blog | Jul 27, 2015 01:29pm
By Kevin Fall, Chief Technology Officer
The Department of Defense (DoD) and other government agencies increasingly rely on software and networked software systems. As one of over 40 federally funded research and development centers sponsored by the United States government, Carnegie Mellon University’s Software Engineering Institute (SEI) is working to help the government acquire, design, produce, and evolve software-reliant systems in an affordable and secure manner. The quality, safety, reliability, and security of software and the cyberspace it creates are major concerns for both embedded systems and enterprise systems employed for information processing tasks in health care, homeland security, intelligence, logistics, etc. Cybersecurity risks, a primary focus area of the SEI’s CERT Division, regularly appear in news media and have resulted in policy action at the highest levels of the US government (See Report to the President: Immediate Opportunities for Strengthening the Nation’s Cybersecurity ). This blog posting is the first in a series describing the SEI’s five-year technical strategic plan, which aims to equip the government with the best combination of thinking, technology, and methods to address its software and cybersecurity challenges.
Software in Government and the SEI’s Value Proposition
Software provides the DoD and other federal agencies significant flexibility in delivering advanced capabilities comparatively quickly by leveraging the enormous existing investments in the IT industry. The demand for these software-reliant advanced capabilities is growing rapidly. For example, in 2006 the F-35 Lightning II had 6,800 KLOC (thousands of lines of code). According to a recent Crosstalk article, that figure has increased to 24,000 KLOC, much of it related to sensing, communications, and data processing.
Trends such as big data, the emergence of cloud computing, cyber-physical systems, the Internet of Things, information sharing in social networks, and autonomous robots have caused the role and importance of software and its security to expand significantly for the DoD and entire government. While incredible efficiencies can result from government adoption of commercial IT technologies, the associated risks and operational requirements are often sufficiently different to require the modification and enhancement of commercial off-the-shelf (COTS) technologies for government purposes.
The SEI works with members of government, academia, and industry to customize, develop, analyze and adapt software technologies and related methods for the measurable benefit of users. To act effectively in its role at the nexus of government, academia, and industry, the SEI maintains expertise in the following areas:
software engineering
systems engineering for software systems
cybersecurity and software assurance
computer science
applied mathematics
measurement of software systems
lifecycle management of software systems
Starting in 2014 and building on earlier work, the SEI is pursuing two primary technical focus areas:
lifecycle assurance of software-reliant systems
high performance software components for the distributed collection, processing, analysis, and dissemination of data and information, even in challenging settings where computing and communications may be limited
The remainder of this post presents an overview of the technical focus areas of the SEI. Future posts in these series will take a deeper dive into each of these focus areas, highlighting research initiatives and accomplishments in each.
Lifecycle Assurance of Software-Reliant Systems
Software behaves differently than "physics-based" systems, such as engines, airframes, and ship hulls. Understanding its complexity and risks is hard, especially for large-scale systems-of-systems composed of many components of differing origins and pedigree. Our work in this area therefore focuses on enabling the government to obtain software-based "capabilities with confidence." Confidence is multi-faceted, encompassing cost and schedule, functionality, security, monitorability and other desirable properties including the -ilities (i.e., non-functional architectural features such as extensibility, flexibility, availability, and efficiency.) Confidence also encompasses the level of assurance that individuals with conventional levels of education and training are able to effectively and safely operate software-reliant systems. To further the technical vision of capabilities with confidence, the SEI focuses on the assurance of two primary lifecycles:
the acquisition lifecycle, which includes aspects of requirements engineering, acquisition strategy selection, project management, and success measures
the software design, development, testing, and operational lifecycle, which is part of the acquisition lifecycle
Both lifecycles have evolved to favor incremental and iterative "agile" approaches. Less well developed are the procedures and tools to provide standardized evidence for assurance throughout these lifecycles, especially at the scale of mission-critical DoD systems. A primary technical strategy element for the SEI involves providing this type of assurance throughout system lifecycles by combining expertise in areas as diverse as cost estimation and malware analysis. To accomplish this, the SEI focuses on the following activities to support the DoD and other government sponsors:
acquisition and management, including quantitative methods for cost and schedule estimation, requirements based on system and software architectural properties including security, earned value assessment of functionality and assurance in conjunction with iterative/incremental development, architectural recovery of legacy systems, sustainment and remediation, and acquisition workforce education
software development, including software/system/network/protocol architecture, model-based engineering, code analysis (binary, source, and malicious), formal analysis and proofs, building assurance cases, performance analysis, software techniques for heterogeneous/novel hardware architectures, cross-domain security designs, and usable security
operations, including operational risk assessment, performance monitoring, and anomaly detection, insider threats, forensic analysis, performance analysis/scalability, simulations and exercises, continuity of operations (COOP)/event response, best uses of human-computer analyses
policy, including gap analysis, security and safety policies, technology transfer and assessment, compliance and validation, privacy considerations in data processing, leadership briefings and consultation
High Performance Software Components for the Distributed Collection, Processing, Analysis, and Dissemination of Data and Information
High performance software components are implemented collections of software functions that are known to perform efficiently and safely in a wide range of environments and that are delivered with evidence indicating freedom from cybersecurity vulnerabilities. The DoD and other government agencies depend on many types of data that are amassed through a process known as TCPED: the tasking, collection, processing, exploitation, and dissemination of (intelligence) data. Modern intelligence has grown far beyond the realm of closed government programs, however, and now includes commercial business intelligence, advertising, and more. Indeed, the current interest in big data, statistics, and machine learning is a modern instantiation of TCPED. It is well established that software is the main driver for implementing modern analytics, advertising, scalable computing, and networks.
While commercial big data has received much attention, the DoD and other national security and emergency response organizations may need to use such capabilities in constrained environments that may lack power, communications, or other computing and communication resources. These challenges appear in the tactical setting (e.g., forward deployed operations or disaster scenarios). Many conventional commercial applications and computational frameworks perform poorly when applied in so-called disconnected, intermittent, and limited (DIL) communication environments. Bringing together assured, portable software components in support of modern TCPED in such environments is another major aspect of this SEI technical strategy element.
SEI researchers focus on the following activities to support the DoD and other government sponsors:
frameworks including programming and computation frameworks for big data and analysis (e.g., map/reduce; Spark), application programming interface (API) security and ease of use, data storage architectures and security, performance monitoring tools
networking protocols and architectures enabling the transport and access to data in tactical environments, protocol fuzzing, formal methods
edge components, including applications and libraries focusing on analytic processing of mobile and tactical environments, disconnected operations, human factors (avoiding operator overload when in stressing circumstances)
algorithms, including efficient portable graph algorithms, heterogenous high-performance computing, pattern matching, applied cryptography
Evaluation and Governance
SEI leadership works to ensure that its projects produce artifacts that are (or will ultimately be) useful to the government and do not require unknown leaps of faith. An emphasis on transitionability for research projects is accomplished by providing guidance and feedback to principal investigators (PIs) regarding stated government problems, industry trends, and potential collaborators. When PIs propose projects each year, SEI leaders ask them to indicate how their projects align to the SEI’s technical focus areas and to show consistency with the expressed R&D needs of the government. For example, the DoD has initiated an effort to communicate its R&D needs and activities in a set of ‘hard problems’ which are being addressed by 17 technical "Communities of Interest" (COI), comprising the collective effort known as Reliance 21. In addition, PIs are asked to discuss the scientifically valid methods they intend to use in demonstrating results and the degree to which they will collaborate with others.
In addition to its internal research review processes, the SEI has external governance from both its DoD sponsors and CMU’s leadership. Annually, the SEI presents its strategic technical direction and project plans to the SEI’s DoD-managed Technical Advisory Group (TAG) and Joint Advisory Committee Executive Group (JAC-EG) that report to our DoD government sponsors at the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)). Likewise, SEI leaders regularly present the SEI’s status, including its R&D activities, to the SEI’s Board of Visitors, which reports to the vice-president of research at CMU.
Wrapping Up and Looking Ahead
The SEI’s technical efforts produce artifacts including component software technologies, methods, analyses, tools, and prototype systems. In addition, the SEI helps to adapt and mature the technical work of others (e.g., government basic research organizations, such as the National Science Foundation and DARPA) for broader application. Our goal is to produce measurable improvements in the security and performance of software-reliant systems through improved practices in software engineering and cybersecurity.
The SEI brings the best combination of thinking, technology, and methods to the most deserving government software-related problem sets, free from conflicts of interest. As part of CMU, the SEI has access to facilities and research talent, including professors, students, and staff members. Our FFRDC status and DoD affiliation grant our technologists access to government data and knowledge of national challenges that is unusual for most university R&D labs.
Future posts in this series will highlight current and forthcoming initiatives in each of our technical areas that are helping to support the DoD and other federal agencies. We welcome your feedback on the technical strategic plan and vision for the SEI. Please leave feedback in the comments section below.
Additional Resources
Download the latest technical notes, papers, publications, and presentations from SEI researchers at our digital library http://resources.sei.cmu.edu/library/.
SEI Blog
Naomi Stanford - Organisational Design
Forty-seven years ago, in 1968, the Fulton Committee published its report on Civil Service reform. I would never have known this except that a colleague reading a blog I published, ‘Horseholding or Leapfrogging’, sent me the link. The report discussed six things the Civil Service needed to reform: the balance of generalists versus experts, the grading structure, its management/leadership capability, the lack of involvement with other stakeholders, the inadequate personnel policies, and authority vested at the wrong levels.
The CEO of the Civil Service, John Manzoni, gave a talk on 2 February 2015 at the Institute for Government. New to the role, he made some observations.
"Government does really hard things, and we ask very bright generalists to do them, and the blunt truth is that doesn’t always work very well."
"The system is designed in many ways to slow things down and be less accountable- the system becomes the people and vice versa"
"We need to create professions and real careers for those who wish to learn about delivery".
"We need big leaders to take accountability for big things."
"The Government is remarkably un-joined up - the future will demand a great degree of collaboration".
"The Civil Service has not taken the development of its people as seriously as the corporate world. I cannot emphasise enough the importance of taking this seriously."
I was struck by the similarity between the observations from 1968 and 2015. Manzoni acknowledges the fantastic progress that has been made over the last four years towards Civil Service reform, but he says this progress is "necessary but not sufficient" and that many before him have tried transforming the Civil Service without huge success (as the echo of 1968 in 2015 shows). I’ve seen evidence of this myself.
I’ve got a great flyer from 2007 about the ‘transformation’ that ‘lean’ techniques promise. It was a massive piece of work, although it wasn’t used to its full potential. But I could re-use the flyer just substituting the word ‘agile’ for ‘lean’. Similarly, I have a lovely 2010 brochure on ‘Managing Change’ that could be re-issued as is.
So what will it take to build the momentum for the Civil Service to meet the challenges of the future? Clearly it’s time for a different approach. Here’s something we in DWP are going to try. On March 16, we are running a hack aimed at finding, and planning to try out, the radical steps that will transform how the Department operates. We’ll be sharing innovative ideas about how we can really change the way we work, act, behave, and think, and, more importantly, we’ll be bringing together a community of people who will come up with an action plan for how we trial, test, and model these transformative ideas.
The Fulton Committee Report was then. This is now. Our hack aims to deliver a different future, so that we’re not looking back in 2062 and saying more or less the same thing that we said in 1968 and saw repeated in 2015.
DWP Digital Blog