By Joe Yankel, Member of the Technical Staff, CERT Cyber Security Solutions Directorate
This post is the latest installment in a series aimed at helping organizations adopt DevOps.
Since beginning our DevOps blog in November, and participating in webinars and conferences, we have received many questions that span the various facets of DevOps, including change management, security, and methodologies. This post will address some of the most frequently asked questions.
How does Change and Release Management integrate with DevOps?
First off, the DevOps culture involves embracing change. One of the core principles of DevOps, the second of the Three Ways described by Gene Kim, is to amplify feedback loops. The Second Way is the continual, iterative feedback loop that allows us to respond quickly to customer needs, and this feedback loop is where change management comes into play. Often, change management occurs when things don't work the way you expected, or when the customer (who can be internal or external) decides that a change should be made to increase business value. This type of change is OK. In fact, you should embrace it.
Without DevOps and the encouragement of continuous integration (CI) and continuous delivery (CD), a requested change usually cannot be addressed until after an official release. With CI and CD in place, teams can achieve faster and more frequent releases, so our customers potentially see those changes in their products earlier. These rapid updates allow them to evaluate the product and their own requirements more quickly, which in turn leads to change requests. Let's face it, nothing brings about a change to requirements quite like actually seeing and using the product.
In such an iterative environment and process, how do you ensure that security requirements are always considered? Should we also have a security person in the DevOps process?
Security must be a first-class citizen throughout the DevOps processes. Here is the typical Venn diagram you see describing DevOps.
Actually, security is often overlooked, and its circle is not even pictured. When describing secure DevOps, which we preach here at the SEI’s CERT Division, the diagram should look like this:
Security must always be considered, and a security expert should be involved in the DevOps process from the beginning. If security is a concern to your business (and these days it should always be a concern), then there is clearly room for a full role dedicated to a subject matter expert in security and privacy. You cannot expect a developer or an operations team member to make the necessary security decisions for a given project, or to be an expert in topics such as:
data privacy
intrusion detection
threat vectors
Common Vulnerabilities and Exposures (CVEs)
package security
authentication
authorization
security standards compliance
Microsoft has worked on a Security Development Lifecycle that shows how to include security in a project's lifecycle. The bottom line is that a security professional should collaborate with your DevOps team from the beginning of a project, because this individual will think of security-related items that others in the room will not, and you'll be glad for it.
Is DevOps used for Agile methodology only or can it be really useful for any kind of development life cycle?
DevOps is really an extension of Agile methodologies, but it is also more of a culture or philosophy (see the Three Ways link in the first question). You can adopt DevOps without practicing Agile methodologies, since there is clearly more to DevOps than just your software development lifecycle (SDLC), but it may be harder. With its iterative processes, Agile certainly complements DevOps more than other SDLC models, such as waterfall, do. You can still be successful without following Agile methods, but software development projects typically realize more success with Agile practices.
Every two weeks, the SEI will publish a new blog post that offers guidelines and practical advice to organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.
Additional Resources
To view the webinar Culture Shock: Unlocking DevOps with Collaboration and Communication with Aaron Volkmann and Todd Waits please click here.
To view the webinar What DevOps is Not! with Hasan Yasar and C. Aaron Cois, please click here.
To listen to the podcast DevOps—Transform Development and Operations for Fast, Secure Deployments featuring Gene Kim and Julia Allen, please click here.
To read all of the blog posts in our DevOps series, please click here.
SEI Blog, Jul 27, 2015 01:07pm
I am on the fence about this new feature of SharePoint 2013. I guess I drank the Kool-Aid about how the ribbon was the best UI invention ever, and how your average user will perform 95% of all actions in the first tab. So when I got to play around with the 2013 preview, I was ehhh about the ribbon being hidden by default. I had to click on a document and then click on the ribbon to get my actions for that document. Introducing an extra click for a user is a little puzzling, but I guess MS got such negative feedback about how "in your face" the ribbon was in 2010 that they had to change it in SharePoint 2013.
So what is the alternative? SharePoint 2013 gives us a new list of common actions (and a preview) in its new ellipsis feature. This is probably the answer your users are looking for when they come and ask you what happened to the drop-down, or how to access the actions with a single click. As you can see from the screenshot, there are only three actions available though (Share, Edit, and Follow).
New Ellipsis options for a word document:
What if your users demand that the ribbon shows when they click on the document? I wrote a quick little script using the new mQuery (see my future post about what mQuery is) that clicks the "Files" or "Items" tab when you click on an item.
EnsureScriptFunc("mquery.js", "m$", function() {
    // When a row in the document library view is clicked...
    m$("#onetidDoclibViewTbl0 > tbody > tr").click(function() {
        // ...click the first ribbon tab ("Files" or "Items") so the ribbon opens.
        m$(".ms-cui-ct-first > a").click();
    });
});
Before:
After:
Netwoven Blog, Jul 27, 2015 01:07pm
By Neil Ernst, Member of the Technical Staff, Software Solutions Division
In their haste to deliver software capabilities, developers sometimes engage in less-than-optimal coding practices. If not addressed, these shortcuts can ultimately yield unexpected rework costs that offset the benefits of rapid delivery. Technical debt conceptualizes the tradeoff between the short-term benefits of rapid delivery and long-term value. Taking shortcuts to expedite the delivery of features in the short term incurs technical debt, analogous to financial debt, that must be paid off later to optimize long-term success. Managing technical debt is an increasingly critical aspect of producing cost-effective, timely, and high-quality software products, especially in projects that apply agile methods. A delicate balance is needed between the desire to release new software features rapidly to satisfy users and the desire to practice sound software engineering that reduces rework. Too often, however, technical debt focuses on coding issues when a broader perspective—one that incorporates software architectural concerns—is needed. This blog post, the first in a series, highlights the findings of a recent field study to assess the state of the practice and current thinking regarding technical debt and guide the development of a technical debt timeline.
The Technical Debt Metaphor
The technical debt metaphor, first introduced by Ward Cunningham in 1992, refers to the degraded quality resulting from overly hasty delivery of software capabilities to users. As my colleague Ipek Ozkaya explained at the 2012 Agile Research Forum, "A little debt speeds up development, and can be beneficial as long as the debt is paid back promptly with a rewrite that reduces complexity and streamlines future enhancements."
At the SEI, our working definition is taken from Steve McConnell:
A design or construction approach that is expedient in the short term but that creates a technical context in which the same work will cost more to do later than it would cost to do now (including increased cost over time).
The SEI Architecture Practices team has been one of the pioneers in advancing the research agenda regarding technical debt. In addition to our ongoing research and industry work, we have also helped to organize the international Managing Technical Debt workshop series. Our early efforts have focused on providing software engineers visibility into technical debt from strategic and architectural perspectives. Our ongoing efforts focus on developing tools and practices for providing comprehensive technical debt detection and visualization for developers, architects and business stakeholders.
One question our work in this field has raised is whether there are practices that move this metaphor beyond a mere communication mechanism. The metaphor is attractive to practitioners because it communicates the idea that if quality problems are not addressed, things may get worse. Is there more to it than that?
Existing studies of technical debt have largely focused on code metrics and small surveys of developers. Practitioners currently broadly define technical debt as a "shortcut for expediency" and more specifically, bad code or inadequate refactoring. The initial definition, from Ward Cunningham, referred to the debt incurred because "first-time code" would ship with a limited understanding of the true nature of the problem. But is there more to technical debt than bad code?
This blog post reports on our survey of 1831 participants, primarily software engineers and architects working in long-lived, software-intensive projects from three large organizations and follow-up interviews of seven of those software engineers.
Approach and Demographics
We piloted and then released a survey consisting of approximately 20 questions. You can find our survey instrument here. The seven follow-up interviews took 45 minutes each. We used coding, a well-developed qualitative research technique for categorizing concepts in text, to classify open-ended answers. Some details about our survey include the following:
Respondents had, on average, six or more years of experience (one-third had more than 15 years).
Roles selected included developers (42 percent), system engineers (7 percent), QA/testers (7 percent), project leads/managers (32 percent), architects (7 percent) and other (6 percent).
There were 39 separate business units represented among the three companies, covering a broad set of domains, from scientific computing, to command and control, to business information systems, to embedded software.
Most projects were web systems (24 percent) or embedded systems (31 percent).
Projects generally consisted of 10 to 20 people, although 32 percent had fewer than 9 staff (including contractors and business staff).
The systems averaged 3 to 5 years old, but a significant number (29 percent) were more than 10 years old.
The systems were typically between 100 KLOC and 1MLOC in size.
Most respondents used Scrum (33 percent) or incremental development methods (20 percent), but some were using self-admitted waterfall (15 percent) and some had no methodology (17 percent).
The remainder of this post details our three research questions, the motivations behind each, and what our team learned from the responses.
First Question: Usefulness of the Metaphor
Our first research question asked
Is there a commonly shared definition of technical debt among professional software engineers?
Our team selected this question primarily because the practice of managing technical debt is still in its infancy. Too often, software developers will approach a manager and indicate that technical debt has been incurred and ask for money to fix it. Our results confirm the widely held belief that neither developers nor their managers share a clear understanding of exactly what is meant by the metaphor and what it means for their project. The exception is a shared understanding that poor architectural choices may generate technical debt.
We asked participants to rank statements using a five-point scale from strongly disagree to strongly agree. You can see some of those statements in the figure below: 79 percent agree or strongly agree that "lack of awareness is a problem" and 71 percent that "technical debt implies dealing with both principal and interest." These responses suggest that there is widespread agreement on high-level aspects of the technical debt metaphor, including some popular financial extensions of the metaphor, such as the notion of principal, interest, and the need for payback.
Perceptions of Technical Debt
As the figure below demonstrates, some of the most commonly-occurring concepts (such as Awareness, Interest, and Time Pressure) on the open-ended questions were similarly high-level. For example, we assigned the concept Interest to the definition "extra effort in projects which is not required for purely technical reasons." These abstract concepts lack the detail for delineating the source of technical debt from the causes and consequences. Less common were answers pointing to the source such as "Code that has been incrementally developed over the years that is now so complicated …" or "bugs and crash-downs."
Coding Frequency for Open-Ended Questions
Our survey responses and follow-up interviews revealed that architecture was commonly seen as a major source of technical debt, which informed our second survey question.
Second Question: Architecture Choices
Our second research question asked
Are issues with architectural elements (such as module dependencies, external dependencies, external team dependencies, architecture decisions) among the most significant sources of technical debt?
Throughout our research into technical debt, we have seen multiple instances where the management of technical debt needed to extend beyond coding issues and focus on architecture issues. For example, one respondent mentioned that some initial hacking had resulted in the abuse of a communication protocol for diagnosis and monitoring, resulting in poor extensibility and high maintenance costs. Another common example is where less modular design in the first release due to time constraints affects subsequent releases. Additional functionality could not be added later without doing extensive refactoring. This refactoring impacted future timelines and introduced additional bugs.
This question was motivated by our previous research into technical debt, as well as the high percentage of responses to the previous question that answered with "architecture choice".
For the second research question, we asked participants to rank a randomly ordered list of 14 choices (shown in the image below) "with respect to the amount of debt (1=high, 14=low) they represent on this project." These choices reflect different possible sources, including code, requirements, and architecture, that emerged from a workshop series we have helped organize, detailed in this paper. The image below shows that architecture choice predominates here.
The image shows a stacked bar for each choice, and the total height of each bar reflects the number of survey respondents who selected that choice as either the first-, second-, or third-highest amount of debt on their project.
Our examples offer cases different from "bad code," since decisions are taken earlier and involve more strategic design. For example: "The work that we’re doing now to introduce a service layer and also building some clients using other technology is an example of, you know, decisions that could have been done at an earlier stage if we had had more time and had the funding and the resources to do them at the time instead of doing it now."
We see a similar architecture focus in our interviews. Five of the seven participants told stories about architecture choices in the context of a heavy emphasis on fast delivery of features and limited budget. These choices were framed in terms of development drifting away from an important architectural decision (in the form of a pattern or application framework) that was no longer followed.
One participant offered an example of the model-view-controller pattern: "In retrospect we put messaging/communication … in the wrong place in the model view controller architecture which limited flexibility. The correct implementation would put it at the model layer (supporting communication interaction between models) rather than at the presentation layer. As a result modifying or adding new roles requires more work than it should."
While architecture choices were the greatest source of technical debt, dealing with that debt was more problematic. This problem stems from the long-life span of many of these projects and a drift from the original decisions, designs, and documentation. For example, "There were some problems in the infrastructure code where there was originally an architecture in place, but it wasn’t necessarily followed consistently. … So thought had been given to that, but in the implementation… shortcuts were taken and dependencies were not clean." One implication of this drift from original designs is a need for better monitoring of decisions and approaches.
Managing Technical Debt
Our final research question asked
Are there practices and tools for managing technical debt?
In asking this question, our team hoped to gain a greater understanding of what tools and practices organizations used in managing technical debt and whether those tools and practices were effective. A majority of the organizations that we interviewed rely primarily on code-level issue trackers.
Our survey revealed few systemic management practices, with 65 percent of respondents having no defined technical debt management practice. Of the remaining respondents, 25 percent managed it at the team level. While there is no explicit standard approach for managing technical debt, there is some management of technical debt within existing processes. For example, 60 percent of respondents track technical debt as part of risk processes or backlog grooming.
We asked about tool use, and 41 percent do not use tools for managing technical debt (26 percent have no opinion; only 16 percent thought tools were giving appropriate detail). For our question concerning who is aware of technical debt, our respondents (most of whom are developers, architects, or program managers) said executives and business managers were largely unaware (42 percent), and only 10 percent said their business managers were actively managing technical debt.
Specific technical debt tools were rarely used to manage architectural issues, owing to the complexity of configuring them or interpreting results. We collated responses to an open-ended question on tools into the most-frequently cited tool categories, seen in the figure below.
Tool Use as a Percentage of Total Answers. None and Unknown Excluded.
Issue trackers, which include tools such as Redmine, Jira, and Team Foundation Server, were the most prevalent (28 percent). After that, no tool category exceeded 11 percent, including dependency analysis (e.g., SonarQube, Understand), code rule checking (e.g., CPPCheck, Findbugs, SonarQube), and code metrics (e.g., Sloccount).
Overall Findings
Our survey revealed that software practitioners agree on the usefulness of the metaphor, notwithstanding different interpretations of what comprises technical debt in particular contexts. There is consensus on McConnell’s definition of "a design and construction approach that is expedient in the short term."
Our data and analysis strongly support that the leading sources of technical debt are architectural choices. Architectural choices and their design implications take many years to evolve and, consequently, are hard to plan and fund. It is vital to manage the drift between the preliminary understanding of the problem and the current understanding of the problem, since this drift will have important implications for the solution. This situation is what Cunningham means by "shipping first-time code is like going into debt."
Developers perceive management as unaware of technical debt issues, and they desire standard practices and tools to manage technical debt that do not currently exist.
We suggest that research in technical debt tooling focus on monitoring the gap between development and architecture, improving ongoing architecture analysis and conformance. Tooling is a necessary component of any technical debt management strategy. We are investigating use of the technical debt timeline as a way to map discovered technical debt issues to guide a management strategy. The next post in this series will explore this idea of a timeline in more detail.
Additional Resources
Portions of this post were taken from our research paper, to be presented at the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE) in September. More details on the methodology used and analyses conducted can be found there.
You can access survey materials, including the questions, here. We are interested in continuing the research, so if you would like to collaborate on a similar survey, please get in touch.
This survey is part of a wider SEI effort on technical debt, including an ongoing research effort.
For other posts in our technical debt series, please click here.
SEI Blog, Jul 27, 2015 01:06pm
For the readers who have not been tracking Microsoft’s evolution in the Web Content Management (WCM) space, the following would provide some insight:
Microsoft acquired a company called nCompass back in 2002. nCompass’s Content Management System (CMS) became the foundation of the WCM capabilities in SharePoint. Since then, Microsoft has been adding features to support WCM in SharePoint.
The launch of the SharePoint 2013 preview presents an interesting opportunity for companies to consolidate all of their web infrastructure (public-facing website, customer extranet, and intranet) onto SharePoint 2013. As mentioned in my earlier article, this can provide a great deal of value to companies by reducing overall management costs as well as allowing a focused team.
Below is a summary of some of the improvements in SharePoint 2013 to support Web Content Management:
Ability to easily copy content from Word to the Rich Text Editor
Ability to work with video content easily
Improved ability to show dynamic content from other websites
Improved image rendition support that allows you to show the same image in different sizes across the site
Improved support for multi-lingual sites
Ability to maintain content in one or more authoring areas and display it in the publishing areas easily
Improved Navigation capabilities that allow you to create complex menus
Ability to aggregate content easily using category pages
Friendly URLs that enable end users to navigate the site easily
Ability to use refiners and faceted navigation
Analytics and recommendations
Branding
Device Specific targeting
In the subsequent blog posts, I will be going into each of these features in greater detail.
This article is written by Niraj Tenany, President and CEO of Netwoven and an Information Management practitioner. Niraj works with large and medium-sized organizations and advises them on Enterprise Content Management and Business Intelligence strategies. For additional information, please contact Niraj at ntenany@netwoven.com.
Netwoven Blog, Jul 27, 2015 01:06pm
"When the Council puts a panel together these are the movers and shakers of all the key agencies. And to have them 25 feet away from you, and to be able to say, ‘I was in Washington and was able to hear from the director of USCIS’… That carries a credibility that’s priceless." - Thomas Barnett, JD. President Obama's executive action on immigration is bringing major changes to the U.S. employment-based immigration system. Employers cannot afford to fall behind. Learn more about these changes and how to keep your organization compliant by attending the Council...
SHRM Blog, Jul 27, 2015 01:05pm
With Microsoft's recent acquisition of Yammer, there is a great deal of excitement in the marketplace. Yammer has been successful for quite some time, and with the acquisition, enterprises will be looking at it more carefully as they select their enterprise social tool.
Netwoven consultant Surya Penmetsa will be discussing various integration approaches and issues around Yammer in this and subsequent blog posts. This blog post discusses the user sign-up process with Yammer.
Sign-Up Process:
Yammer presents a few options to sign up users:
Manual sign-up: The network administrator invites users to sign up for Yammer, and the user signs up using the browser. The user profile page requests only limited information (see the screenshot below). While this provides an easy way to sign up individual users, it may not be very attractive for corporations that require more information about each user.
The Network administrator has the ability to perform other administrative activities such as blocking or removing users.
Bulk upload: Network administrators can upload a CSV file that contains user profile information. This allows the creation of new accounts and updates to existing ones. The network administrator also has other administrative options, shown in the screenshot below.
Directory sync: Enterprises often require such tools to work with Active Directory (AD), which holds user information. Yammer’s ADSync tool allows you to synchronize your domain user profiles with Yammer. You can specify exclusion rules to filter out users that are not supposed to access the Yammer network, and the synchronization can be scheduled. This approach needs detailed review with your AD team, as there could be performance implications depending on the Active Directory structure. Visit this page for more details. This option also automates the removal of users who are no longer with the company.
This article is written by Surya Penmetsa from Netwoven. Surya Penmetsa is a Principal Consultant with Netwoven. Surya specializes in the design and implementation of highly scalable solutions with SharePoint, K2, .NET, Yammer, and many other technologies. Netwoven is a professional services firm founded by ex-Microsoft employees. Netwoven specializes in the design and implementation of Enterprise Content Management, Business Intelligence, Business Process Management, Cloud Services and mobile applications. For additional information, please contact us at info@netwoven.com.
Netwoven Blog, Jul 27, 2015 01:05pm
You are a fair employer. You try to look at all candidates equally and don’t discriminate based on demographics. You care about talent. But like it or not, stereotypes and unintentional bias impact our thought process and decisions in complex ways — ways we don’t even realize. Maybe you’ve noticed it while looking for new talent and maybe you haven’t, but gender impacts how men and women approach the job search, and how employers make hiring decisions. This infographic, compiled by MedReps.com, a job board which gives members access to the most sought after medical sales jobs and pharmaceutical...
SHRM Blog, Jul 27, 2015 01:04pm
In this short article, we discuss the single sign-on (SSO) process. The creation of profiles in Yammer was discussed in an earlier article. Once a profile is created, SSO can be enabled by configuring a federation service on the corporate network using ADFS, Ping, or similar products, and by having the corresponding configuration done on the Yammer side to add your federation service endpoint. Once both endpoints are set up correctly, a user request to Yammer (using Yammer Embed or SharePoint web parts) is redirected to the corporate federation service. The local federation service authenticates the domain user, creates SAML 2.0 assertions, and redirects the request back to the Yammer service. Visit this page for more details.
Below is the logical flow diagram of communication between the various components (in reality there could be more components, such as firewalls):
This article is written by Surya Penmetsa from Netwoven. Surya Penmetsa is a Principal Consultant with Netwoven. Surya specializes in the design and implementation of highly scalable solutions with SharePoint, K2, .NET, Yammer, and many other technologies. Netwoven is a professional services firm founded by ex-Microsoft employees. Netwoven specializes in the design and implementation of Enterprise Content Management, Business Intelligence, Business Process Management, Cloud Services and mobile applications. For additional information, please contact us at info@netwoven.com.
Netwoven Blog, Jul 27, 2015 01:04pm
I was poking around the new Office365 preview and started writing some custom JavaScript tweaks when I found a JavaScript class file called mQuery.js. This was confusing to me, as I had heard rumors that SharePoint 2013 was going to ship with jQuery out of the box. So I started probing into this new mQuery.js file, and lo and behold, it looks very similar to jQuery’s event features. Here is the mQuery.js file from their CDN, and here are some functions from the file:
addClass
css
bind
removeClass
click
So how can you make use of these functions? mQuery uses the same no-conflict technique that jQuery uses, so instead of a typical jQuery call such as $('#myID').hasData(), you can use m$('myID').hasData(). This way you can use mQuery and still use all the jQuery functions you have already written.
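To make that concrete, here is a short, illustrative sketch of what calling these functions might look like. It assumes the helpers listed above behave like their jQuery counterparts and that the element ID and CSS class used below exist on your page; treat it as a sketch rather than documented API behavior.

EnsureScriptFunc("mquery.js", "m$", function () {
    // Illustrative element ID and class name; replace with real ones from your page.
    var banner = m$("#myBanner");
    banner.addClass("ms-highlight");          // assumed to behave like jQuery's addClass
    banner.css("border", "1px solid #888");   // assumed to behave like jQuery's css
    banner.bind("click", function () {        // assumed to behave like jQuery's bind
        banner.removeClass("ms-highlight");
    });
});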
At the end of the day I really hope that Microsoft decides to use jQuery and publishes these custom functions as plugins. But I don’t know how realistic this is, so we might want to start using the built-in mQuery functions.
Netwoven Blog, Jul 27, 2015 01:03pm
Wellness programs are on the rise in the workplace because ‘a healthy employee is a happy employee’, right? While this statement may be true, the statement ‘a healthy employee means lower company health costs’ should receive credit for increased wellness initiatives as well. Employer group health plans sometimes are a company’s largest cost, and the recent changes with the Affordable Care Act have employers rethinking their benefit packages to create a new, more compliant plan. Wellness initiatives can keep employees healthy and out of the doctor’s office, which keeps premiums lower. But don’t wellness programs cost money too? It...
SHRM Blog, Jul 27, 2015 01:02pm
With the availability of SharePoint 2013 Preview, one can quickly see that site branding and the web developer experience have been significantly improved as part of Microsoft’s overall investment in Web Content Management (WCM).
Site branding has gone through a significant update in SharePoint 2013 Preview. If you’ve ever built custom SharePoint publishing site elements: masterpages, theme, and custom css, you know this is not the motherhood-and-apple-pie effort of traditional web development. Some tasks are simple - creating an Office theme in PowerPoint, and customizing the site banner artifacts. But the task quickly becomes testy as you work through layers of css buried in nested table markup. And last but not least, one must also possess a strong working knowledge of the ASP.NET placeholder controls of a working master page.
In all fairness, every multi-purpose WCM architecture imposes strict adherence to structure as a necessity, whether SharePoint or any other WCM system. But easing the implementation effort is a bonus all around. And kudos to Microsoft for making that investment in SharePoint 2013. Below is a quick list of key improvements.
SharePoint 2013 Branding Improvements:
Page editor context editing links allow quickly making updates to Global Navigation or Quick Link navigation.
Metadata-driven navigation means you can easily and dynamically build a navigation taxonomy separate from your page title and folder architecture. It also means you don’t have to be a designer to manage the navigation structure.
Client-of-choice development tools: SharePoint Designer is no longer the only way to build a masterpage or page layout asset. The catalog and page libraries are now exposed to WebDAV clients including Windows Explorer. WebDAV in Explorer requires a quick install, and you must also run the WebClient Service on the SharePoint VM. Then you’re off. This means it will be a lot easier for traditional web developers and designers to build custom site layouts and branded master pages. This is a significant overall change that will enable broader SharePoint adoption for publishing sites.
Composed Looks Gallery replaces Office Themes. A composed look is made up of two XML theme files: the color palette file and the font scheme file. The composed look brings together multiple styling artifacts beyond the Office Themes of 2010: the master page, color palette, background image URL, and your font scheme.
And because WCM development and web development go hand in hand, I mention below some web development and other related improvements:
Web Parts and layout Zones are DIVs in Publishing Pages. Gone are the days of many-layered tables with nested DIVs buried in a table cell. Web part content is styled using HTML, CSS and JavaScript. Another feature that will be easier for the traditionalists out there. This will also help a great deal with W3C page validation and building accessibility-friendly pages.
Content by Search Web Part is much more flexible than the Content by Query Web Part. No longer do you need to edit XSLT to traverse your Doc Set hierarchy (a personal favorite of mine). With search more natively integrated as a web part, site content can be more organically personalized and sourced across multiple sites.
Clean URLs mean you can build branded name links rather than nested folder URLs. This feature also makes its way into the catalog site, which I’ll discuss in another post.
Image renditions allow for building site image standards (thumbnails, product images, event images, avatars, etc.). When a user uploads an image, they can make sure it conforms to the design standards of the site from a single image file, regardless of the source image size.
SEO is now built into SharePoint, with support for XML sitemaps and custom SEO properties such as the <meta> tag description and keywords.
There are many other detailed features worth mentioning that will simplify your web design and development effort - easier drag and drop, the overall interface is less ‘clicky’ and more application-like, less paging to navigate to common edit and configuration features, and even embedded or linked YouTube content!
All told, you should find plenty of smile-inducing eye candy in this release. I’ll dig deeper into some of the specifics in upcoming posts.
Netwoven Blog, Jul 27, 2015 01:02pm
The PowerPivot feature in SQL Server 2012 provides powerful data mashup capabilities to support self-service BI scenarios. It’s based on the xVelocity in-memory analytics engine that achieves very high performance for analytic queries by leveraging columnar storage, compression, and parallel data scanning and aggregation algorithms. PowerPivot integrates with SharePoint Server 2010 to provide a reliable platform for building the managed BI collaboration environment such as avoiding proliferation of spreadmarts, ensuring data consistency and data refresh across user-generated workbooks, and providing and monitoring usage patterns. It also takes advantage of core SharePoint capabilities such as role-based security, versioning, compliance policies, and workflows.
The official MSDN documentation for the manual steps to install SQL Server 2012 PowerPivot in an existing SharePoint 2010 farm can be found here. After installation, you can configure PowerPivot either by performing the manual steps in Central Administration as documented here, or using the "PowerPivot Configuration Tool". This article describes the PowerShell scripts needed to completely automate the installation and configuration process. In a subsequent post, we’ll discuss the "PowerPivot Configuration Tool" and some issues we encountered while trying to use this tool to validate the installation and configuration accomplished using the automated scripts described here.
1. Install SQL Server Analysis Services
The first step is to install "Analysis Services" (in-memory mode) and "Analysis Services SharePoint Integration" components on the application server(s) in SharePoint Server 2010 farm that will host Analysis Services. This is accomplished by invoking the SQL Server 2012 setup.exe in a PowerShell session as follows:
$setupPath = "<Path to SQL 2012 Bits>\setup.exe"
$command = "$setupPath /q /ACTION=Install /IAcceptSQLServerLicenseTerms /ROLE=SPI_AS_ExistingFarm /INSTANCENAME=PowerPivot /ASSVCACCOUNT=<Account for AS Service> /ASSVCPASSWORD=<Password> /ASSYSADMINACCOUNTS=$farmAdminsGroupName /ErrorReporting=1 /UpdateEnabled=0
Invoke-Expression "& $command"
Note the following parameters passed to setup.exe: /Role of "SPI_AS_ExistingFarm" is equivalent to selecting "Analysis Services" and "Analysis Services SharePoint Integration" on the graphical setup screen, /InstanceName can only be "PowerPivot", /UpdateEnabled is set to 0 so that the install does not fail on servers with no external connectivity, and /ASSysAdminAccounts specifies that the farm administrator group is also an Analysis Services administrator (you can specify multiple individual accounts using the /PARAMETER="value1" "value2" "value3" format). No SQL relational database engine needs to be installed. For a complete list of parameters, see Install SQL Server 2012 from the Command Prompt.
Once the installation is complete, the SQL log file can be examined for any errors using the following function:
function ValidatePowerPivotInstall()
{
    [bool] $bReturn = $false
    [string] $Constant_LogFilePath = "C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\Summary.txt"
    if ((Test-Path $Constant_LogFilePath))
    {
        $SetupLog = Get-Item "$Constant_LogFilePath"
        # Look for the success marker that SQL Server setup writes to its summary log
        $installationSuccess = $SetupLog | Select-String -Pattern "Installation completed successfully"
        if ($installationSuccess -ne $null)
        {
            $bReturn = $true
        }
        else
        {
            Write-Host "Installation not successful"
            $bReturn = $false
        }
    }
    return $bReturn
}
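For example, a deployment script could gate the remaining configuration steps on this check (illustrative usage only):

if (-not (ValidatePowerPivotInstall))
{
    Write-Host "SQL Server setup log does not report success; stopping PowerPivot configuration."
    return
}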
a. Install OLE DB and XMLA libraries on Excel Server
If Excel Calculation Services and PowerPivot run on separate application servers in the farm, installation of the new SQL Server 2012 version of Analysis Services OLE DB provider (MSOLAP.5) is needed on app server(s) running Excel Calculation Services. The provider is included in SQL Server Setup, therefore explicit installation is only required if the Excel server is not a PowerPivot application server.
$setupPath = "<Path to SQL 2012 Bits>\setup.exe"
$command = "$setupPath" /q /ACTION=Install /FEATURES=Conn /IAcceptSQLServerLicenseTerms /ERRORREPORTING=1 /UpdateEnabled=0"
Invoke-Expression "& $command"
Note that the Feature parameter of "Conn" installs connectivity components.
In addition, the new OLE DB provider must be specified as a trusted data provider in Excel Service App.
[Array] $serviceApps = Get-SPServiceApplication | Where-Object { $_.TypeName -eq "Excel Service Application"}
foreach($ecsApp in $serviceApps)
{
$provider = $ecsApp | Get-SPExcelDataProvider | where {$_.ProviderId -eq "MSOLAP.5"}
if (!$provider)
{
$ecsApp | New-SPExcelDataProvider -providerId "MSOLAP.5" -ProviderType oleDb -description "Microsoft OLE DB Provider for OLAP Services 11.0"
Write-Host "Registered MSOLAP.5 as a Trusted Data Provider with Excel Service App"
}
}
b. Install ADOMD.NET on server hosting Central Administration web site
Some Central Admin reports in the PowerPivot Management Dashboard use ADOMD.NET to access data collected on PowerPivot query processing and server health in the farm. If the farm server hosting the Central Administration site does not run Excel Services or PowerPivot, installation of ADOMD.NET client library is needed for these reports to function properly.
$setupPath = "<Path to SQL 2012 Bits>\setup.exe"
$command ="$setupPath" /q /ACTION=Install /FEATURES=Conn /IAcceptSQLServerLicenseTerms /ERRORREPORTING=1 /UpdateEnabled=0"
Invoke-Expression "& $command"
2. Deploy Solutions
After Analysis Services is installed on one or more servers, the "PowerPivot Tools" directory on the server(s) (%Program Files%\Microsoft SQL Server\110\Tools\PowerPivotTools\ConfigurationTool\Resources) will contain two SharePoint solutions (.wsp files). PowerPivotFarm.wsp is a farm-level solution that adds the library templates (for the PowerPivot Gallery and Data Feed libraries) and application pages, and PowerPivotWebApp.wsp is a web-application-level solution that adds the PowerPivot web service to the web front end. The following scripts install these solutions in SharePoint and deploy the web-application-level solution to the desired web applications in the farm.
#Deploy Farm level solution
$sol = Get-SPSolution | Where-Object { $_.Name -eq "PowerPivotFarm.wsp" }
if ($sol -ne $null -and $sol.Deployed -eq $false)
{
Install-SPSolution -Identity "PowerPivotFarm.wsp" -GacDeployment -Force -Confirm:$false
}
WaitForSolutionDeployment "PowerPivotFarm.wsp" $true
#Deploy Web App level solution
$webAppUrl = (Get-SPWebApplication -IncludeCentralAdministration | Where { $_.DisplayName -eq "<web app name>"}).Url
Install-SPSolution -Identity "PowerPivotWebApp.wsp" -WebApplication $ webAppUrl -GacDeployment -Force -Confirm:$false
WaitForSolutionDeployment "PowerPivotWebApp.wsp" $true $ webAppUrl
The web application must use Windows classic mode authentication and not claims authentication to support PowerPivot. The WaitForSolutionDeployment function is borrowed from the PowerPivot Configuration Tool resources script (ConfigurePowerPivot.ps1 in the "PowerPivot Tools" directory mentioned above) and modified as follows:
# This method will be used to wait for the timer job deploying or retracting a solution
# to finish. The parameter $deploy is a bool that indicates if this is a deployment or a
# retraction
function WaitForSolutionDeployment
{
param($solutionName, $deploy, $webApplication)
$solution = Get-SPSolution $solutionName -ErrorAction:SilentlyContinue
$count = 0
while(!$solution -and $count -lt 10)
{
"PowerPivot Solution is not added to farm yet. Wait 3 seconds and check again."
Start-Sleep -s 3
$count++
$solution = Get-SPSolution $solutionName -ErrorAction:SilentlyContinue
}
if(!$solution)
{
"PowerPivot solution does not exist in the farm"
return
}
"Found solution " + $solutionName
$activeServers = @($solution.Farm.Servers | where {$_.Status -eq "Online"})
$serversInFarm = $activeServers.Count
## Wait for the job to start
if (!$solution.JobExists)
{
"Timer job not yet running"
$count = 0;
## We will wait up to 90 seconds per server to start the job
$cyclesToWait = 30 * $serversInFarm;
while (!$solution.JobExists -and $count -lt $cyclesToWait)
{
Start-Sleep -s 3
$count++;
}
## If after that time the timer job still doesn't exist, verify whether it succeeded
if (!$solution.JobExists)
{
if ($deploy -xor $solution.Deployed)
{
"Timer job did not start"
# throw new Exception(Strings.ASSPIGeminiSolutionNoDeployed_Exception);
return
}
else
{
"Timer job already finished"
return
}
}
}
else
{
"Timer job already started"
}
if($deploy)
{
$deployText = "deployed"
}
else
{
$deployText = "retracted"
}
## If deploy action and solution not deployed yet or retract and solution still deployed
[bool]$status = CheckIfDeployed $solution.Name $webApplication
if (((!$solution.ContainsWebApplicationResource -and
($deploy -xor $solution.Deployed)) -or
($solution.ContainsWebApplicationResource -and
($deploy -xor ($status)))))
{
"Solution not yet " + $deployText
$count = 0
## We will wait up to 10 minutes per server
$cyclesToWait = 200 * $serversInFarm;
# We enter this cycle if solution is not yet deployed or retracted
$status = CheckIfDeployed $solution.Name $webApplication
while (((!$solution.ContainsWebApplicationResource -and
($deploy -xor $solution.Deployed)) -or
($solution.ContainsWebApplicationResource -and
($deploy -xor ($status)))) -and
$count -lt $cyclesToWait)
{
write-host "." -nonewline
Start-Sleep -s 3
$count++
$status = CheckIfDeployed $solution.Name $webApplication
## Check every 3 minutes to see if job is aborted or failed
## Application still not deployed/retracted and job not running mean something is wrong
## We can't check geminSolution.JobStatus because it throws in the absence of a job.
if (($count % 60 -eq 0) -and ($deploy -xor $solution.Deployed) -and !$solution.JobExists)
{
"We waited " + $count + " seconds for the solution to be " + $deployText + ". However, the PowerPivot solution is not yet " + $deployText + ". Please check whether SharePoint timer job is enabled. "
break
}
}
}
else
{
"Solution already " + $deployText
}
Start-Sleep -s 15
## Check if solution wasn't successfully deployed/retracted
$status = CheckIfDeployed $solution.Name $webApplication
if (((!$solution.ContainsWebApplicationResource -and
($deploy -xor $solution.Deployed)) -or
($solution.ContainsWebApplicationResource -and
($deploy -xor ($status)))))
{
"We waited " + $count + " seconds for the solution to be " + $deployText + ". However, the PowerPivot solution is not yet " + $deployText + ". Please check whether SharePoint timer job is enabled. "
throw "Solution failed to " + $deployText + ", reason: " + ($solution.LastOperationDetails) + " at: " + ($solution.LastOperationEndTime)
}
"PowerPivot solution is successfully " + $deployText
}
function CheckIfDeployed($solutionName, $webAppName)
{
$solution = Get-SPSolution $solutionName -ErrorAction:SilentlyContinue
$webApp = Get-SPWebApplication $webAppName
foreach ($deployedWebApp in $solution.DeployedWebApplications)
{
if ($deployedWebApp.Id -eq $webApp.Id)
{
return $true
}
}
#write-host "$solutionName not deployed to $webAppName"
return $false;
}
The deployment of these solutions makes three features available in the farm, which can be installed as follows:
try
{
Install-SPFeature -path PowerPivot -force -Confirm:$false
Install-SPFeature -path PowerPivotAdmin -force -Confirm:$false
Install-SPFeature -path PowerPivotSite -force -Confirm:$false
}
catch
{
write-host "Error: $_"
}
In addition, activation of the PowerPivotSite feature at the site collection level is necessary to make the application pages and templates available to specific sites:
Enable-SPFeature -identity "PowerPivotSite" -URL "<site collection URL>"
3. Register Engine and System Services
This step is critical in making PowerPivot functionality available in the SharePoint farm, but it is missing from the list of manual steps in the official MSDN documentation. Recall from the PowerPivot for SharePoint architecture that an instance of the PowerPivot System Service exists on each server in the farm running the Analysis Services in-memory engine, performing important functions (such as monitoring server health, coordinating client requests for load balancing, collecting usage data, and performing automatic data refresh for PowerPivot workbooks). The PowerPivot System Service works with Excel Services in SharePoint 2010 to extract the database from the Excel workbook, select an appropriate SharePoint application server running the Analysis Services service (preferably one that may already have the data loaded into memory), and then attach the database to the Analysis Services instance.
To facilitate the above operation, all servers must have their instances of Analysis Services (the "Engine" service) and the PowerPivot "System" Service registered with the farm. The deployment of solutions and activation of features in Step 2 simply registered the engine service and system service as farm-wide services, but no instances of these services have been registered in the farm yet. While creation of a PowerPivot Service Application will succeed at this point, there will be no Analysis Services instances available in the farm to serve a request. In Central Administration, under System Settings, click "Manage services on servers", select the farm server(s) where Analysis Services was installed in Step 1, and verify that "SQL Server Analysis Services" and "SQL Server PowerPivot System Service" are not yet available to be started. Then run the following scripts on each of these servers to register their local instances of the Engine and System services in the farm:
$service = Get-PowerPivotEngineService
if ($service -eq $null)
{
Write-host "The PowerPivot Engine Parent Service is not registered in the farm: $_"
#installation error, do not proceed
}
$service = Get-PowerPivotSystemService
if ($service -eq $null)
{
Write-host "The PowerPivot System Parent Service is not registered in the farm: $_"
#installation error, do not proceed
}
try
{
$serviceinstance = Get-SPServiceInstance -Server "$ENV:COMPUTERNAME" | where { $_.TypeName -like "*Analysis Services*" } #Get-PowerPivotEngineServiceInstance
if ($serviceinstance -eq $null)
{
New-PowerPivotEngineServiceInstance -Provision:$true
Write-host "New PowerPivot Engine Service Instance created on "$ENV:COMPUTERNAME"
}
}
catch
{
Write-Host "The PowerPivot Engine Service Instance not created : $_"
}
try
{
$serviceinstance = Get-SPServiceInstance -Server "$ENV:COMPUTERNAME" | where { $_.TypeName -like "*PowerPivot*" } #Get-PowerPivotSystemServiceInstance
if ($serviceinstance -eq $null)
{
New-PowerPivotSystemServiceInstance -Provision:$true
Write-host "New PowerPivot System Service Instance created on "$ENV:COMPUTERNAME "
}
}
catch
{
Write-Host "The PowerPivot System Service Instance not created : $_"
}
After running this script, verify in Central Administration that "SQL Server Analysis Services" and "SQL Server PowerPivot System Service" are now available to be started on the server(s) where the instances were registered, and start them (if not already started).
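If you prefer to start them from PowerShell instead of Central Administration, a minimal sketch along the following lines should work; note that the TypeName filters are assumptions based on the display names above.

# Start the locally registered PowerPivot-related service instances if they are not already online.
Get-SPServiceInstance -Server "$ENV:COMPUTERNAME" |
    Where-Object { ($_.TypeName -like "*Analysis Services*" -or $_.TypeName -like "*PowerPivot*") -and $_.Status -ne "Online" } |
    ForEach-Object { Start-SPServiceInstance -Identity $_ | Out-Null }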
4. Create Service Application
To make the service available to clients, a PowerPivot service application (a shared service instance of the PowerPivot System Service) needs to be created in the farm. Important service parameters, such as the connection pool timeout and the unattended data refresh account, can be specified for the service application. This is accomplished by running the following script:
$serviceAppName = "<name>"
$dbServer = "<server name>"
$dbName = "<DB name>"
$saPowerPivot = New-PowerPivotServiceApplication -ServiceApplicationName $serviceAppName -DatabaseServerName $dbServer -DatabaseName $dbName -AddtoDefaultProxyGroup:$true
Netwoven Blog, Jul 27, 2015 01:00pm
If you’re in the human resources profession (or plan to be), you will most likely experience at least one—if not several—HR technology implementations throughout your career. Technology implementations affect every part of HR, not only the human resource information system managers, and they affect every part of the organization. If you’ve never experienced an HR technology implementation, you’re in for an exciting challenge. Humans are generally resistant to change. Carefully preparing for each phase of the implementation will help result in a positive experience for the entire organization. The implementation period begins immediately after the...
SHRM Blog, Jul 27, 2015 01:00pm
I've been a youth sports coach for several years. I've coached a number of successful teams: sometimes we win, sometimes we lose. Wavering focus and the direction in which the ball bounces are unpredictable. What is always predictable is that the sidelines/bleachers will be filled with people who have not volunteered their time to help but are full of opinions as to how to manage the game. The easiest thing to do is to never devote yourself to anything but pretend you know everything. Why commit yourself to trying and failing if in your single opinion...
SHRM Blog, Jul 27, 2015 12:59pm
Had the pleasure to attend an interesting Yammer/Forrester webinar the other day and thought I would summarize the talking points I heard about social for the enterprise:
Defining business value for an enterprise social effort is easier than for other knowledge management efforts. This is true; I go back to when I was helping directors come up with ROI slides for SharePoint as a portal, and I thought, wow, this was hard. The business value of a team of salespeople being able to communicate about a deal is a much easier story to sell than people being able to share a document in under a minute.
Video is the new killer content. This one was an old one for me; I remember people saying that when YouTube was taking off. I want to believe it, because in the consumer world watching videos of my friends or family is really cool. But do we really want to see a video of the head of marketing recapping our achievements?
Social cloud platforms have higher user adoption. I know this is biased, it being a joint Yammer webinar, but I see it. Think of these cloud platforms: there aren’t hoops to jump through, and they are always up to date, not a two-year-old instance of an on-prem server that IT forgot to update.
The sales department is the place to start your company’s social adoption. The sales folks are usually quicker to pick up things that will help them close deals, and they have smartphones and lots of time to check news feeds and respond to things. I would love to see a study of which department’s employees spend the most time on Facebook and other consumer social sites; I bet it would be the sales folks.
The second half was about how Booz Allen Hamilton uses Yammer. It was OK, but it didn’t resonate with me. I hope this helps give you some fresh ideas.
Netwoven Blog, Jul 27, 2015 12:59pm
I attended an HR conference last month where I heard some of the savviest business speak ever. Speakers from all over the globe discussed "leveraging," "ROI," "streamlining," "interfacing," "synergy," and "big data" more eloquently than ever imagined. I’ll be honest though. By the time I got to the fifth speaker, I was snoring because my head could not make anything palatable out of this jargon salad. Everywhere I looked someone was tossing business-speak land mines as if they were trying to make my head explode. The worst part is hearing people use terms like "market-driven frictionless acquisition" or "resource-leveling...
SHRM Blog, Jul 27, 2015 12:59pm
This weekend I put together a list of vocabulary that Netwoven uses when engaging in enterprise social projects and opportunities. This is by no means a complete set, but it starts to help people understand what you are talking about.
Also stay tuned for our SharePoint 2013 specific vocab list.
@Mention: a way of identifying a person in a post that lets the person know they were mentioned, usually by an @ symbol in front of the name, but Google Plus uses a + instead.
90-9-1 rule: 90% of users are "lurkers" that read and observe but don’t contribute, 9% of users contribute only occasionally, and 1% of users make the majority of postings in any one network
AstroTurfing: A tactic used by some to create a fake grassroots movement or buzz
Authenticity: Used to describe "real" people behind blog posts and other social profiles.
Avatar: An avatar is a name or image that represents a person on forums, social networks, and other websites. Usually a small picture or unique username.
Blog: A site updated frequently by an individual or group to record opinions or information.
Crowdsourcing: Harnessing the skills and enthusiasm of the crowd to contribute content and solve problems.
Digg: A social news website that lets members vote for their favorite articles.
DM: A Direct Message is a private message from one person to another that others on Twitter cannot see.
Feed: In the social media world, the stream where information from your social network is gathered and presented.
Forum: Also known as a message board, a forum is a site dedicated to discussion.
Geotagging: Geotags are location-based tags attached to status updates, media, or other posts that provide GPS information.
Hashtag: The # symbol, called a hashtag, is used to mark keywords or topics in a Tweet or post. Identical hashtags are then grouped into a search thread as a way to categorize messages.
Meme: A concept, image, or phrase that spreads virally and becomes everyday lingo.
Microblogging: Short message postings from a social media account. Facebook statuses and Twitter posts are two examples.
Online reputation management: ORM is the act of monitoring the social spaces for mentions of a company or person, often done by tools or applications that aggregate the networks.
Opengraph: An open protocol that Yammer, Facebook, and other social networks use to enable a web page to become a rich object in a social graph.
Tag: Indicates or labels what content is about.
Trending: A word, phrase, or topic that is popular at a given moment.
Tweeps: Twitter + people; people who use Twitter (also written Tweeple).
Viral: Anything shared across social networks that gets passed along rapidly. YouTube videos are a great example.
Wiki: Simple web pages that can be edited by other users.
Netwoven Blog | Jul 27, 2015 12:59pm
On June 10, @shrmnextchat chatted with @williamtincup and @johnsumser about Perfecting the HR Tech Implementation. In case you missed this informative chat that was filled with great tips and advice, you can read all the tweets here: [View the story "#Nextchat RECAP: Perfecting the HR Tech Implementation" on Storify] ...
SHRM Blog | Jul 27, 2015 12:59pm
Often there is a need to customize the content processing pipeline to meet certain business requirements. Various approaches to doing so are discussed in this blog.
The figure below shows a logical overview of how crawling and content processing work for SharePoint 2013 Enterprise Search.
If the requirement is that the managed properties of crawled items must be modified before they are indexed, then custom business logic needs to be implemented somewhere in the content processing pipeline. The only place where SharePoint 2013 allows us to call external SOAP services (WCF and web services) is during "Content Enrichment".
In this article, two cases will be discussed:
Case 1:
"Calcutta" is a city in India, recently renamed as "Kolkata". Now some people may search with "Kolkata" and others with "Calcutta". Here the "Location" property can be modified to "Kolkata" whenever "Calcutta" entry is found against "Location". For this the following steps are needed:
Create a WCF application in Visual Studio 2012 and add a reference to "Microsoft.Office.Server.Search.ContentProcessingEnrichment.dll", which you can find in "C:\Program Files\Microsoft Office Servers\15.0\Search\Applications\External".
Delete the default interface (e.g. IService1).
Add the following namespace references to the Service1.svc.cs file:
Microsoft.Office.Server.Search.ContentProcessingEnrichment
Microsoft.Office.Server.Search.ContentProcessingEnrichment.PropertyTypes
Implement the "IContentProcessingEnrichmentService" interface in the Service1.svc.cs file.
Implement the ProcessItem method. This is the method in which you receive the required properties for each item (a minimal sketch appears after these steps).
Add the following inside <system.serviceModel> in the web.config file:
<bindings>
  <basicHttpBinding>
    <!-- The service will accept a maximum blob of 8 MB. -->
    <binding maxReceivedMessageSize="8388608">
      <readerQuotas maxDepth="32"
                    maxStringContentLength="2147483647"
                    maxArrayLength="2147483647"
                    maxBytesPerRead="2147483647"
                    maxNameTableCharCount="2147483647" />
      <security mode="None" />
    </binding>
  </basicHttpBinding>
</bindings>
Host this WCF service in IIS: create a virtual directory, map it to the physical path of the WCF application, then right-click the virtual directory and click "Convert to Application".
Browse to the hosted .svc file and note its URL.
Execute the following PowerShell script to map "Content Enrichment" to the hosted custom WCF service:
$ssa = Get-SPEnterpriseSearchServiceApplication
$config = New-SPEnterpriseSearchContentEnrichmentConfiguration
$config.Endpoint = "http://Site_URL/<service name>.svc"
$config.InputProperties = "Location"
$config.OutputProperties = "Location"
$config.SendRawData = $True
$config.MaxRawDataSize = 8192
Set-SPEnterpriseSearchContentEnrichmentConfiguration -SearchApplication $ssa -ContentEnrichmentConfiguration $config
Run a full crawl on the content source.
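As promised above, here is a minimal sketch of what the ProcessItem implementation for this case might look like. It is an illustration only: it assumes "Location" arrives as a single-valued string managed property (multi-valued managed properties arrive as Property<List<string>> instead), and the class name simply matches the Service1.svc.cs file created earlier.
using System;
using System.Collections.Generic;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment.PropertyTypes;

public class Service1 : IContentProcessingEnrichmentService
{
    // Holds the properties we want to send back to the content processing pipeline.
    private readonly ProcessedItem processedItem = new ProcessedItem
    {
        ItemProperties = new List<AbstractProperty>()
    };

    public ProcessedItem ProcessItem(Item item)
    {
        processedItem.ErrorCode = 0;
        processedItem.ItemProperties.Clear();

        foreach (var property in item.ItemProperties)
        {
            // Assumption: "Location" is a single-valued string managed property.
            var location = property as Property<string>;
            if (location != null &&
                location.Name.Equals("Location", StringComparison.OrdinalIgnoreCase) &&
                !string.IsNullOrEmpty(location.Value) &&
                location.Value.Equals("Calcutta", StringComparison.OrdinalIgnoreCase))
            {
                // Rewrite the old city name so searches for "Kolkata" also find these items.
                location.Value = "Kolkata";
                processedItem.ItemProperties.Add(location);
            }
        }
        return processedItem;
    }
}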
Case 2:
When there is more than one content source, one can be designated for "Advanced" resources and the other for "Intermediate" resources. Here the "Author" property of items from the first content source can be set to "Advanced" and that of items from the second to "Intermediate", so that when users search using the "Advanced" keyword they see files from the first content source, and vice versa (a sketch of the per-endpoint service appears after these steps).
The two content sources must first be distinguished and then processed differently. That is why a "WCF Router" is needed to identify the content source and route each item to the appropriate WCF service.
Create two WCF services as described in Case 1.
Create a WCF application, open its web.config, and configure it as described below:
Here "basicHttpBinding" is being used<basicHttpBinding><bindingname="basicHttpBinding_IContentProcessingEnrichmentService"maxReceivedMessageSize = "8388608″>
<readerQuotas
maxDepth="32″
maxStringContentLength="2147483647″
maxArrayLength="2147483647″
maxBytesPerRead="2147483647″
maxNameTableCharCount="2147483647″ />
<security mode="None" />
</binding>
</basicHttpBinding>
Then configure the services section (a base address is not required if the service is hosted in IIS):
<services>
  <service behaviorConfiguration="RoutingServiceBehavior"
           name="System.ServiceModel.Routing.RoutingService">
    <endpoint name="RoutingServiceEndpoint"
              address=""
              binding="basicHttpBinding"
              bindingConfiguration="basicHttpBinding_IContentProcessingEnrichmentService"
              contract="System.ServiceModel.Routing.IRequestReplyRouter"/>
  </service>
</services>
A service behavior needs to be created that references the name of the filter table (defined in the next step). To enable full inspection of the SOAP envelopes by the XPath filters, the "routeOnHeadersOnly" attribute is set to false.
<behavior name="RoutingServiceBehavior">
  <routing filterTableName="ContentSourceFilters"
           routeOnHeadersOnly="False"/>
</behavior>
Now configure the client endpoints for the two WCF services:
<client>
  <endpoint name="ContentProcessingEnrichmentService"
            address="http://localhost:300/ContentProcessingEnrichmentService/Service1.svc"
            binding="basicHttpBinding"
            bindingConfiguration="basicHttpBinding_IContentProcessingEnrichmentService"
            contract="*"/>
  <endpoint name="ContentProcessingEnrichmentServiceDB"
            address="http://localhost:300/ContentProcessingEnrichmentServiceDB/Service1.svc"
            binding="basicHttpBinding"
            bindingConfiguration="basicHttpBinding_IContentProcessingEnrichmentService"
            contract="*"/>
</client>
Now "Routing" will be configured i.e. the mapping section between wcf and contentsource. Here the filters and the filter table are defined to map the filters to normal endpoints (and optionally to backup endpoints). Here "Xpath" filtertype is used as e XPath expressions look for all Property nodes in the SOAP envelope.<routing><namespaceTable><add prefix="cc" namespace="http://schemas.microsoft.com/office/server/search/contentprocessing/2012/01/ContentProcessingEnrichment"/>
</namespaceTable>
<filters>
<filter name="Sharepoint" filterType="XPath" filterData="//cc:Property[cc:Name[. = 'ContentSource'] and cc:Value[. = 'Local SharePoint sites']]"/>
<filter name="WCMContent" filterType="XPath" filterData="//cc:Property[cc:Name[. = 'ContentSource'] and cc:Value[. = 'WCM']]"/>
</filters>
<filterTables>
<filterTable name="ContentSourceFilters">
<add filterName="Sharepoint" endpointName="ContentProcessingEnrichmentService"/>
<add filterName="WCMContent" endpointName="ContentProcessingEnrichmentServiceDB"/>
</filterTable>
</filterTables>
</routing>
Build the solution and host it in IIS as in Case 1.
Execute the following PowerShell commands to integrate the router service (Router.svc) with content enrichment:
$ssa = Get-SPEnterpriseSearchServiceApplication
$config = New-SPEnterpriseSearchContentEnrichmentConfiguration
$config.Endpoint = "http://localhost:300/Router/Router.svc"
$config.InputProperties = "Author", "Filename", "ContentSource"
$config.OutputProperties = "Author"
$config.SendRawData = $True
$config.MaxRawDataSize = 8192
Set-SPEnterpriseSearchContentEnrichmentConfiguration -SearchApplication $ssa -ContentEnrichmentConfiguration $config
Run a full crawl on both content sources.
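To complete the picture, each routed endpoint implements ProcessItem along the lines of the Case 1 sketch above. A hypothetical version for the service behind the first content source might simply stamp the "Author" output property with "Advanced"; the second service would do the same with "Intermediate". This assumes "Author" arrives and is returned as a multi-valued Property<List<string>>, and the skeleton (class, usings, processed-item holder) is the same as in the Case 1 sketch.
public ProcessedItem ProcessItem(Item item)
{
    var result = new ProcessedItem
    {
        ErrorCode = 0,
        ItemProperties = new List<AbstractProperty>()
    };

    // Tag everything routed to this endpoint as "Advanced".
    result.ItemProperties.Add(new Property<List<string>>
    {
        Name = "Author",
        Value = new List<string> { "Advanced" }
    });

    return result;
}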
Hope you find this article interesting and helpful. Let us know in the comments below if you have any questions.
Netwoven Blog | Jul 27, 2015 12:59pm
Posted on the Delaware Employment blog by Molly DiBianca by William W. Bowser ...
SHRM Blog | Jul 27, 2015 12:58pm
K2 offers a Management Worklist web part and a Worklist web part for managing worklists. However, there are situations when these web parts need to be customized to meet additional business requirements. Here are some use cases that may require customization of the available K2 web parts:
An application has several SharePoint sub sites where information is managed for project sub-teams. The teams should only see the process instances and work list items associated with them. The K2 Work list web part does not provide a comprehensive filtering scheme to filter process instances or tasks based on business data stored in process data fields.
Project managers responsible for a project should be able to see all tasks for that project. The management Work list web part does not provide a way to filter tasks by process instances; when a user has Admin rights to a process, it lists all tasks for all instances of the process regardless of the project. This will be a topic of our next blog.
The K2 Worklist web part provides a mechanism to filter tasks, but it is limited to out-of-box fields such as Folio, Originator, etc. Often there is a need for additional filtering based on custom process XML fields (for example, Project-Id). The source code for the K2 Worklist web part is available on K2 Underground, and a relatively straightforward modification can be made to accommodate the additional filtering needs. This avoids reinventing the wheel and building something from scratch, while providing a user experience consistent with the other out-of-box K2 web parts.
Following is the structure of the K2 Worklist web part in the Visual Studio Solution Explorer. The code is fairly complex and contains a lot of functionality; however, only the following two .cs files need modification:
TaskListControlPage.cs
TaskListWebPartFactory.cs
Execution starts in the TaskListWebPartFactory.cs file, which defines the TaskListWebPartFactory class, the actual web part class derived from the ASP.NET WebPart class:
public class TaskListWebPartFactory : System.Web.UI.WebControls.WebParts.WebPart, IWebEditable
The TaskListWebPartFactory class is a container that loads an instance of the TaskListControlPage class.
The Render() method of the TaskListControlPage class calls the ApplyFilter() method, which actually sets the filters for the worklist items.
We let the ApplyFilter() method do its job of setting the default filters (configured by the end user on the Edit Web Part properties page) and then applied our additional filtering on top. This is accomplished by adding a call to an AddCustomFilter() method that we created. This custom method adds the desired filter(s) to the TaskListControlPage class's CurrentConfiguration property, mimicking how the out-of-box ApplyFilter() method handles user-defined filters. Following is the code snippet:
private void AddCustomFilter()
{
    // Add a filter on a custom process data field, on top of the
    // user-configured filters already applied by ApplyFilter().
    if (!string.IsNullOrEmpty(_projectIdValue))
    {
        CurrentConfiguration.Criteria.AddFilterField(
            SourceCode.Workflow.Client.WCLogical.And,
            SourceCode.Workflow.Client.WCField.ProcessData,
            "<Process Field Name>",
            SourceCode.Workflow.Client.WCCompare.Equal,
            "<Process Field Value>");
    }
}
The <Process Field Name> in the above code needs to be replaced by the name of the process data field that holds the project name and <Process Field Value> replaced by the project name to filter on. In our implementation, the project name was stored in the SharePoint property bag for the site, and the K2 process was designed to grab that value from the context and save it in a process data field (specified in <Process Field Name>) when a process instance is launched. The web part can access that project name to filter on from the containing site’s property bag and use that in place of <Process Field Value>.
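As a rough illustration of that last point, a helper like the following could populate _projectIdValue from the containing site's property bag. The "ProjectID" key is hypothetical and must match whatever key your provisioning logic actually uses; the rest is standard SharePoint server object model code (it requires a reference to Microsoft.SharePoint).
// Hypothetical helper on the web part: reads the project identifier from the
// containing site's property bag so it can be used in place of <Process Field Value>.
// Requires: using Microsoft.SharePoint;
private string GetProjectIdFromCurrentSite()
{
    SPWeb web = SPContext.Current.Web;
    if (web != null && web.AllProperties.ContainsKey("ProjectID"))
    {
        return web.AllProperties["ProjectID"] as string;
    }
    return null;
}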
The ApplyFilter() method is not declared in a way that allows it to be overridden in a derived class, so the implementation modified the original ApplyFilter() definition rather than creating a derived version that refines the logic.
Everything else beyond that point is unchanged, and the customized web part displays only the filtered set of worklist items from the process instances that match the process field value used in the query.
Summary
We were able to provide a webpart that really trims the noise and shows only the Worklist items and process instances that are relevant to the user.
About the Authors
This article is written by Viraj Bais and Surya Penmetsa from Netwoven. Viraj Bais is the CTO of Netwoven and Surya Penmetsa is a principal consultant with Netwoven. Viraj and Surya specialize in the design and implementation of highly scalable solutions with SharePoint 2010, K2 Blackpearl and advanced .NET applications. Netwoven specializes in the design and implementation of Enterprise Content Management, Business Intelligence, Business Process Management, Cloud Services and Mobile Applications. For additional information, please contact us at info@netwoven.com.
Netwoven Blog | Jul 27, 2015 12:58pm
I read a recent story from Bloomberg BNA about plummeting revenues at McDonald’s—and the company's proposal to fix the problem with "toasted buns" and "grilled burger patties"—and had two immediate reactions. First, Burger King has been saying this for years. Char-grilled flavors are better (granted, I'm biased). Second—and more important—is there another cause? Is a toasty bun really the answer to a decade of dwindling revenues? Market research plays a key role in improving products, but there are other leading indicators of decline at McDonald’s: increased operational costs, greater voluntary turnover from management staff, lowest-ever customer service scores. These are people problems,...
SHRM Blog | Jul 27, 2015 12:58pm
In this blog post we will discuss customizing the K2 BlackPearl Management Worklist web part.
K2 offers a Management Worklist web part and a Worklist web part for managing worklists. However, there are situations when these web parts need to be customized to meet additional business requirements. Recall that the out-of-box K2 Management Worklist web part requires the user to have admin rights on the process and presents all process instances of that process running in the system, without providing any filtering functionality.
Disassembling the appropriate K2 assemblies and inspecting the source code of the ManagementWorklistWebPart class, we can see that the bulk of the processing to fetch the data to be displayed happens in the LoadWPData() method, and a fair bit of that processing is performed using classes declared internal to the assembly. The end result is the creation of a DataTable object containing data for all process instances, which is assigned to a class member. There are two different options to enhance this class:
Option 1 - Override this method in a class derived from the out-of-box ManagementWorklistWebPart class.
Option 2 - Invoke the base class's LoadWPData() method in the derived class, then apply the desired filter to the resulting DataTable (both options are sketched below).
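For orientation, a skeleton of such a derived web part might look like the following; the derived class name is ours, and only Option 2 calls into the base class.
// Hypothetical derived class; ManagementWorklistWebPart is the out-of-box K2 class.
public class FilteredManagementWorklistWebPart : ManagementWorklistWebPart
{
    public override void LoadWPData()
    {
        // Option 1: rebuild the process-instance DataTable here without calling the
        // base class -- hard in practice, because much of that work is done by types
        // marked internal to the K2 assembly.

        // Option 2: let the base class build the unfiltered DataTable first,
        // then prune it (the full implementation of this approach follows below).
        base.LoadWPData();
        // ...apply custom filtering to _dataTable here...
    }
}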
Below is an implementation of Option 2. To prune the unfiltered data set, it is joined with a second data set containing only the rows of interest. This second data set can be created in any desirable way (e.g. using filter parameters exposed as web part properties); in our implementation it is created using a process SmartObject's GetList() method and filtered on the Project ID stored as a process data field in the current process instance. The joined result is stored in the class member _dataTable, from where it is rendered by other methods in the base class.
public override void LoadWPData()
{
try
{
// Create the unfiltered data set using base K2 class
base.LoadWPData();
// Create the filtered data set using DHF SmartObject and join with Unfiltered set to remove unwanted rows
DHFProcessData dhf = new DHFProcessData();
dhf.Status = 1;
if (bFilteringInEffect)
{
DataTable dt = dhf.GetList();
IEnumerable<DataRow> query =
from processInst1 in _dataTable.AsEnumerable()
join processInst2 in dt.AsEnumerable()
on processInst1.Field<string>("ProcInstId") equals processInst2.Field<string>("ProcessInstanceId")
select processInst1;
DataTable boundTable = query.CopyToDataTable();
this._dataTable = boundTable;
}
}
catch (Exception ex)
{
this._errorList.Add(ex.Message);
}
}
About the Authors:
This article is written by Viraj Bais and Surya Penmetsa from Netwoven. Viraj Bais is the CTO of Netwoven and Surya Penmetsa is a Principal Consultant with Netwoven. Both Viraj and Surya specialize in the design and implementation of highly scalable solutions with SharePoint, K2, .NET, and many other technologies. Netwoven is a professional services firm founded by ex-Microsoft employees. Netwoven specializes in the design and implementation of Enterprise Content Management, Business Intelligence, Business Process Management, Cloud Services and mobile applications. For additional information, please contact us at info@netwoven.com.
Netwoven Blog | Jul 27, 2015 12:58pm
"Winter is Coming" is a key theme of the popular HBO series Game of Thrones. With its warning of constant vigilance, the meaning is clear - no matter how good or calm things seem now, the good times and serenity won’t last forever…and you need to prepare and be proactive to ensure you’re ready for when the tide turns. While talk of the long, dark winter in Game of Thrones centers around the inevitable attacks of the White Walkers and their ability to conquer the Seven Kingdoms if not unchecked, he could easily have been speaking...
SHRM Blog | Jul 27, 2015 12:58pm