Wednesday, March 25, 2015

White Collar Productivity: A Review

It's rare that I get excited about a new area of research... but I'm excited.

The business case for information governance is tricky. We all know that poor information management leads to worker inefficiencies, but it's difficult to turn that intuition into a business case. Will saving a worker's time really result in better productivity? It depends on whom you ask. IT will say "absolutely". A contrary CFO will say "that's a soft benefit! No soup for you."

One of the challenges with researching this issue is that it has become so conflated with technological hubris. Vendors tell us that they have many solutions for overcoming issues of white collar productivity and that a small investment will make all of the problems go away. Somewhat strangely, the problem was very similar in the 1970s and 1980s, but the "vendors" were providers of office furniture like Steelcase and Herman Miller!

Ideally, I want to explore solutions to this problem from an era when we weren't completely clueless (e.g., early railroads gave us many innovations but they were still operationally primitive) but hadn't yet been polluted by dot com techno-optimism. I found a potential approach in some of the work conducted by NASA in the mid-1980s. Perfect! Strangely, I discovered this work in the OPAC of my local university library by exploring alternate entries for the APQC! Apparently, in its early days it did some work for NASA. Of course, the OPAC coughed up some resources but the URLs were broken, so I had to scrounge them up elsewhere. Fortunately, the Internet never really forgets...

US Army Corps of Engineers -- Evaluating knowledge worker productivity : literature review (1994)

Ah, the Army Corps. It seems that I can't get away from these guys. They haunted my early work as a geotechnical engineer and now I find that they preceded me in knowledge worker productivity.

The report opens with a clear statement: "Quantifying knowledge work tasks is difficult." The trigger for the creation of the report was the introduction of a KWS (Knowledge Worker System). It notes that productivity is a key concept and provides a basic measure of "output divided by input", or O/I. The key measure is _productivity change_ between two intervals, expressed as a percentage.
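The O/I arithmetic is simple enough to sketch. A minimal illustration (the report counts and hours below are hypothetical, not from the report):

```python
def productivity(output, input_):
    """Basic productivity measure: output divided by input (O/I)."""
    return output / input_

def productivity_change(o1, i1, o2, i2):
    """Percentage change in O/I between two intervals."""
    p1 = productivity(o1, i1)
    p2 = productivity(o2, i2)
    return (p2 - p1) / p1 * 100

# Hypothetical: 120 reports from 400 hours, then 150 reports from 450 hours
change = productivity_change(120, 400, 150, 450)
print(f"Productivity change: {change:.1f}%")  # Productivity change: 11.1%
```

The catch, as the report goes on to explain, is picking an "output" that means anything for knowledge work.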

Unfortunately, this approach isn't great for knowledge workers:

"As long as the workforce consisted largely of manufacturing jobs, these techniques were adequate. The early measurement techniques, however, are not well suited to 'white-collar' work because such work is not repetitive or simple."

Apparently there is a final USACERL technical report on this issue... somewhere.

Ideally, a new technology will change the linear input/output relationship, leading to a steeper line (i.e., increased productivity). But there are challenges with this view, namely: inefficiency; input/output changes (if inputs drop in quality, outputs will also drop); and nonconstant returns (the O/I line might be a curve, a stepped line, or discontinuous).

There are many ways of defining productivity but there should be three objectives: to identify improvements; to decide how to reallocate resources; to determine how well goals have been met.

There is always a tension between _macroproductivity_ (at a national level), _microproductivity_ (at a business level), and _nanoproductivity_ (at a suborganization level). The challenge is with white collar work: "Knowledge work is all work whose output is mainly intangible, whose input is not clearly definable, and that allows a high degree of individual discretion in the task. This difference in work content requires different approaches to productivity evaluation."

A challenge with measuring knowledge worker productivity is that individual gains don't necessarily translate to others. In general, productivity should be measured at the work group level. Individual measurement is challenging:

"The nonroutine nature of knowledge work means that it is very difficult to measure a norm. There is no obvious average to observe and record, so any measure will be somewhat inaccurate."

The other challenge is what to actually measure:

"The work is so complex that an artificial indicator is evaluated rather than the actual work. Often, the indicator is chosen because it is easily quantified. This approach ignores potentially important aspects of the output, such as quality."

Regardless, you need to measure. Collect data via inquiry, observation, or through system data or documentation.

One classification approach is to evaluate tasks by the lowest level of employee that can execute it and then compile a matrix to determine if workers are performing at, below, or above their level. Of course, this approach assumes complete task detail and that the inefficiencies lie within the individual.
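That matrix idea can be sketched in a few lines. This is my own minimal illustration, not the report's method; the task names and grade levels are hypothetical:

```python
# Hypothetical task classification: each task is tagged with the lowest
# employee grade that could perform it (1 = clerical, 3 = senior analyst).
tasks = [
    ("file reports", 1),
    ("draft budget", 3),
    ("schedule meetings", 1),
]

def classify(worker_grade, task_grade):
    """Is the worker performing at, below, or above their level?"""
    if task_grade == worker_grade:
        return "at"
    return "below" if task_grade < worker_grade else "above"

# A grade-3 analyst doing grade-1 work is working below their level;
# the matrix surfaces how much time is spent on misallocated tasks.
analyst_grade = 3
matrix = {name: classify(analyst_grade, grade) for name, grade in tasks}
print(matrix)
```

Which, of course, bakes in exactly the assumptions noted above: complete task detail, and inefficiency located in the individual.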

Sink (1985) apparently developed some great acronyms for various techniques, including: Multi-Factor Productivity Measurement Model (MFPMM), Normative Productivity Measurement Methodology (NPMM), and Multi-Criteria Performance/Productivity Measurement Technique (MCP/PMT).
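The multi-criteria idea behind something like MCP/PMT reduces to a weighted composite of normalized measures agreed on by the work group. A hedged sketch (measure names, scores, and weights are all hypothetical, not Sink's):

```python
# Hypothetical normalized measures (0 to 1) for a work group,
# combined with weights the group has agreed on (weights sum to 1).
measures = {"timeliness": 0.80, "quality": 0.95, "user satisfaction": 0.70}
weights  = {"timeliness": 0.30, "quality": 0.50, "user satisfaction": 0.20}

# Weighted sum yields a single composite index to track over time.
composite = sum(measures[k] * weights[k] for k in measures)
print(f"Composite productivity index: {composite:.3f}")  # 0.855
```

The single number is only useful for spotting trends between periods, which matches the report's advice that it's about trends, not perfect accuracy.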

On measurement, there are a few best practices: get worker participation in establishing the productivity measures; if a process is too complex to measure, use a less complex sub-process; use the best measure, even if several different measures must be used; don't expect perfect accuracy -- it's about trends; finally, "measuring is better than not measuring".

Representing work is challenging but there are a few different hierarchies. The first dimension is the "components of work" for which blue-collar and white-collar work have different profiles:

  • Knowledge use. The amount and complexity of information required to do the work
  • Decision making. Application of knowledge to determine how to process the work.
  • Complexity. Difficulty of the job.
  • Time per job. Time spent completing the job.
  • Repetitive. A function done the same way every time.
  • Volume. Number of times the activity will occur in a given time cycle.
  • Skilled activity. Physical difficulty of performing the work; inversely relates to the mental difficulty or complexity. Some activities require both e.g., surgery.
  • Structured. Constraints on how, when, where, and what is done.

NOTE: The "skilled activity" component is defined as a physical dimension but we know that there are other types of learned/tacit skills.

There are a few different techniques for work measurement:

  • Group 1: Complex setup, complex implementation. Predetermined time-motion studies, stop-watch studies, logging.
  • Group 2: Complex setup, simple implementation. Self-logging, sampling, counting.
  • Group 3: Simpler setup, moderate implementation. Committee, estimation.

The appendix contains an interesting definition for _knowledge_: "Relational information about objects or groups of objects. Knowledge allows the worker to use data in performing an activity."

Reference: USACERL Interim Report FF-94/27, Evaluating Knowledge Worker Productivity: Literature Review (1994)

NASA -- R&D Productivity: New Challenges for the US Space Program (1985)

These conference proceedings contain some interesting papers about white-collar productivity (along with a whole lot of mysterious highly technical space-age mumbo jumbo!).

I particularly like this document because it is so clearly not of the Internet age. The attention to printing and reproduction is so awesomely... quaint. Hopefully, it contains some ground truth.

There are three papers of particular interest in this thing:

  • White collar productivity improvement: a success story by Don Hutchinson and E.L. Fransen (518)
  • White collar productivity improvement in a government research and development administrative support organization by Bradley Baker (529)
  • White collar productivity improvement sponsored action research, executive summary and findings by Steven Leth (571)

(Although I also find some of the other titles compelling, like: "Space crew productivity: the driving factor in space station design")

Let's start with the Hutchinson and Fransen paper.

The paper is basically a case study from an October 1984 project at the McDonnell Douglas Astronautics Company. The project follows the APC model. The study was applicable to 33 employees of a financial controls department.

One of the challenges was that there were a lot of different and sometimes competing quality programs and there was little early feedback from the program to encourage progress.

The WBS basically looked like:
- Pilot introduction
-- management
-- employees
- Diagnosis
-- survey
-- interviews
-- synthesis
-- feedback
-- action items
- Objectives
-- management sessions
-- feedback
- Measurement
-- nominal group
-- integrate measures
-- assign weights
-- monitor/feedback
- Service (re)design
-- map service
-- identify needs
-- redesign/refine
- Team Development
-- identify interfaces
-- clarify roles
-- commit support
- Technology Parameters
-- review parameters
-- enlist vendor support
-- implementation

The initial presentation and meetings were met with skepticism by employees but the survey and results quickly solidified engagement. The paper then gives some excruciating detail on the process. The conclusion notes:

"Improvements accrue to each of the three groups when the members of those groups believe in the process."

Success indicators of the project included:
- Improvement in the quality of work attitudes, leadership, communication, participation, goal-setting, measurement and analysis, rewards and recognition, and resource utilization
- Improved user relationships with the department. This process started with the identification of products and services and identification of user perception. "Effective interaction begins to occur when the department is viewed through the eye of the user."
- Creation of a belief in management.

Key lessons included:
- management support was important.
- users needed to feel engaged
- user involvement was important
- focus is on "effectiveness", not "efficiency". Ultimately, effectiveness will drive efficiency.

"The key to productivity improvement is through the development of a recognition that it is a continuous process."

Overall, the study was interesting but not surprising. It could, however, be valuable reading for young analysts who aren't sure what a workshop process should look like.

Let's look at the second paper by Bradley Baker. It describes a similar process in the Procurement Division of the NASA-Lewis Research Center involving 108 persons.

Initial investigation indicated a few symptoms of a disengaged work force: "little or nothing has come out of [previous initiatives]", "decisions get made in the chief's offices without input from lower levels", "everyone in the Division at times feels isolated, cut off, or by-passed", "people follow the chain of command", etc.

The discovery phase indicated that different levels of management had different perspectives on the most important mandates and goals of the organization. There was some degree of schedule slippage and then challenges regarding the introduction of new proposed measures due to the lack of involvement of some parties:

"After the first meeting and the passing of several days, passions were calmed and in subsequent meetings, the Division Chief provided more visible support and some protection to the recommendations, while balancing this with an openness to rational, constructive comments."

The Task Force met weekly to monitor the progress of various subcommittees as they worked on the phases of Service Redesign, Teamwork, and Technology parameters.

APC recommended a methodology where the procurement process was divided into specific parts. Each part was then assessed on an as-is and to-be basis to generate a list of what were essentially requirements and recommendations. Each subgroup was headed by two task force members who then recruited "knowledgeable, helpful nonsupervisory employees and supervisors."

One challenge occurred in bringing forward recommendations without the sponsor present, resulting in "rocky" and "non-constructive" comments. Subsequently, recommendations were made in a retreat setting with the sponsor present.

And finally, on to the executive summary of the big APC study. Unfortunately, I can't find the details [n.b., the report is available as a historical novelty at the APQC site and is in a few library collections. See below].

The summary leads with some examples of how knowledge workers have changed their deliverables to focus on increased "effectiveness"... not necessarily "efficiency."

The summary notes that employees often view these initiatives as a cost/employee-cutting approach and feel alienated. They reference a 1982 study executed with Steelcase (listed -- but not available on Amazon) that notes that both the knowledge and process of white-collar process improvement is underdeveloped.

APC's approach is really about involvement and innovation to improve process "outputs". The paper describes the process (similar to above):

- Diagnosis phase
-- clarification of and agreement on the work unit's outputs and services
-- definition of user's needs and expectations
-- identification of leverage points for productivity gains
- Objectives phase
-- clarification of the unit's mission and purpose
-- creation of a vision for achieving the mission and purpose
-- objectives tied to the development and delivery of services
- Measurement phase
-- measures emphasizing service effectiveness and critical points
-- means to track and feed back data for problem solving
-- data useful for ongoing improvements
- Service (Re)Design
-- clear, agreed upon approaches to service development and delivery
-- services that are consistent with objectives and measures
-- improved capability to identify opportunities for improvements and execute changes
-- a framework for effective implementation of new office technology
- Team development
-- smoothed working relationships among coworkers and with other units for functional groups
-- agreement on back-up personnel and procedures
-- improved morale, enhanced cooperation, active participation
- Technological parameters
-- parameters for technology directly in support of services
-- more efficient performance of routine tasks
-- enhanced communication ability

After two years, various case studies indicated a variety of observations:

1. "white collar productivity improvement is founded on basic issues of vision, orientation, and management practices
2. "attention to 'operational' issues will enable productivity improvement to take place
3. "white collar professionals require additional training in order to deliver their services effectively [n.b., but what is "training"?]
4. "administrative systems within an organization offer a major opportunity for productivity improvement
5. "measurement of white collar work is both possible and desirable
6. "technology, such as computer mediated systems or new office environmental designs, is best justified when linked to critical junctures for features of white collar services
7. "self-reliance is a key to ongoing productivity improvements
8. "white collar productivity improvement is dependent on seven critical success factors:
- a climate supportive of change, innovation, and risk-taking
- a vision for the future of the function that is shared among all employees
- emphasis on service issues and opportunities
- a flexible methodology, one the function can adapt to its own circumstances and business
- leadership by the function's managers, not by a consultant or lower-level employee
- technology directly linked to productivity leverage points
- involvement and 'buy-in' by most employees at all levels of the function."

Wow. So in the early days, these conversations were as much about encouraging adoption of new office furniture as they were about encouraging adoption of new technology!

And I found the original report. You can get it from the APQC but it costs like $50! Interlibrary loan? U of Guelph and U of Sask apparently have copies. And the Steelcase report is apparently at Western.

