Saturday, March 28, 2015

Application Value -- from KPIlibrary

Technical Value (TV) of application(s)

The level of efficient and effective technical adaptability of applications to business processes. Calculated by scoring (0=bad, 1=medium, 2=good) the following parameters with weighting:
  • A- Continuity of delivery organisation
  • B- Quality of organisation, processes, people and procedures
  • C- Response times to changing business goals
  • D- Quality of functional structure
  • E- Amount of corrective maintenance
  • F- Level of systems infrastructure based on standards
  • G- Level of independence of ICT components
  • H- User friendliness
  • I- Quality of meta definitions
  • J- Quality of technical documentation
  • K- Level of technical skills

Business Value (BV) of application(s)

The level of business process support provided by applications. Calculated by scoring (0=bad, 1=medium, 2=good) the following parameters with weighting (a minimal scoring sketch follows this list):
  • A- Number of usergroups
  • B- Organisational risk
  • C- Support to management of organisation
  • D- Strategic coherence
  • E- Importance for other systems
  • F- Dependencies on other systems
  • G- Functional coverage
  • H- Image
  • I- Data quality
  • J- User documentation
  • K- Functional knowledge
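
KPIlibrary doesn't say how the scores and weights roll up into a single number, so here's a minimal sketch of one plausible interpretation in Python. The 0/1/2 scale is theirs; the weights and scores below are invented for illustration:

```python
# Minimal weighted-scoring sketch for the TV/BV parameters above.
# Scores: 0 = bad, 1 = medium, 2 = good. Weights are hypothetical;
# KPIlibrary does not prescribe specific values.

def weighted_value(scores: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted average of 0-2 scores, scaled to 0-100."""
    total_weight = sum(weights[p] for p in scores)
    raw = sum(scores[p] * weights[p] for p in scores)
    return 100.0 * raw / (2 * total_weight)  # 2 = maximum score per parameter

# Technical Value example: equal weights, documentation (J) weighted double.
tv_scores = {"A": 2, "B": 1, "C": 0, "D": 1, "E": 2, "F": 1,
             "G": 1, "H": 2, "I": 0, "J": 1, "K": 2}
tv_weights = {p: 1.0 for p in tv_scores}
tv_weights["J"] = 2.0  # hypothetical emphasis on technical documentation

print(f"TV = {weighted_value(tv_scores, tv_weights):.1f}/100")
```

The same function covers BV; only the parameter labels and weights change.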

I should probably, at some point, rationalize ASL, MOF, and ITIL on this stuff.

Friday, March 27, 2015

Knowledge worker (and RIM) metrics


Some metrics of value courtesy of KPIlibrary (a toy calculation over a few of these follows the list):

  • average frequency of updates of documents
  • document storage costs
  • % of documents in non-enterprise repositories
  • % of documents accessible to search engine
  • % of searches resulting in a document being opened
  • master data duplication ratio
  • average time to update employee records
  • time to respond to legal discovery of records
  • % of documents not accessed regularly
  • % of duplications/document variations
  • ratio of paper to electronic documents
  • document change turnaround time
  • number of documents changed as % of total documents
  • % of documents digitally archived
  • number of documents that have not been removed after end-of-life
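
Several of these are easy to compute once you have a document inventory. A toy sketch; the inventory and all field names are assumptions:

```python
# Hypothetical sketch: computing a few of the RIM metrics above from a
# simple document inventory. Field names and records are invented.
from datetime import date, timedelta

docs = [
    {"id": 1, "repo": "enterprise", "indexed": True,  "last_access": date(2015, 3, 1),  "end_of_life": None},
    {"id": 2, "repo": "sharedrive", "indexed": False, "last_access": date(2013, 6, 12), "end_of_life": date(2014, 1, 1)},
    {"id": 3, "repo": "enterprise", "indexed": True,  "last_access": date(2014, 11, 5), "end_of_life": None},
]

today = date(2015, 3, 28)
n = len(docs)

pct_non_enterprise = 100 * sum(d["repo"] != "enterprise" for d in docs) / n
pct_searchable     = 100 * sum(d["indexed"] for d in docs) / n
pct_stale          = 100 * sum(today - d["last_access"] > timedelta(days=365) for d in docs) / n
past_end_of_life   = sum(d["end_of_life"] is not None and d["end_of_life"] < today for d in docs)

print(f"% in non-enterprise repositories: {pct_non_enterprise:.0f}%")
print(f"% accessible to search engine:    {pct_searchable:.0f}%")
print(f"% not accessed in the last year:  {pct_stale:.0f}%")
print(f"docs past end-of-life:            {past_end_of_life}")
```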


Not a bad start. The Public Record Office of Victoria also gives us some guidelines (http://prov.vic.gov.au/wp-content/uploads/2011/05/1010g3.pdf)...

Actually, this document gives us a whole methodology and sample metrics, all based on ISO 15489 and ISO 31000. Nice.

And what is "ASL Applications Cycle Management"? It is, perhaps, something we should know.

Measuring knowledge worker productivity

Maybe I'm not done on the whole measurement thing. A 2004 paper by Ramirez seems to have some opinions: "Measuring knowledge worker productivity: a taxonomy," Journal of Intellectual Capital.



The literature review introduces some key questions:

  • What is the objective of a task?
  • What are the outputs that need to be produced to accomplish the task?
  • How can the outputs be counted?
  • How much and what kind of resources are needed to produce the outputs?
  • What is a feasible and desirable operating plan for the next time period?
  • Can the measures be replicated and standardized?


There are a variety of different measurement methodologies:

  • function point analysis
  • operations-based productivity measurement
  • data envelopment analysis
  • efficiency, standard times, and operating efficiency
  • operation functional analysis
  • engineering operations analysis
  • administrative productivity indicator and multiple output productivity indicator
  • multi-minute measurement
  • achievement method -- completion of goals
  • normative productivity measurement methodology
  • percentage of time spent in value-added activities
  • professional time utilization
  • outcome as a measure
  • outcome input ratio
  • quality and activity
  • economic value added
  • cross functional analysis
  • generator activity measurement technique
  • interviews and surveys
  • peer evaluations
  • macro, micro, and mid-KW productivity models (quality, schedule, cost, absenteeism, overtime, lost time, cost reduction)

Wow. That's quite a list. Unfortunately it reminds me of the fanciful machines of the theatrum machinarum or the explosion of life represented in the Burgess Shale. Each of these approaches represents a lot of work but is likely some sort of evolutionary dead-end. Fortunately, Ramirez provides a summary of the dimensions we see represented in the models (by frequency):

  • Quantity. Outputs (quantities) and outcomes (satisfaction, etc.)
  • Costs and/or profitability.
  • Timeliness.
  • Autonomy.
  • Efficiency (or "doing things right")
  • Quality
  • Effectiveness (or "doing the right things")
  • Customer satisfaction
  • Innovation/creativity
  • Project success
  • Responsibility/importance of work
  • Perception of productivity
  • Absenteeism

Quantity is the most referenced metric. Unfortunately, it's also the hardest to determine since so many organizations struggle with the issue of "what do we do?".

Thursday, March 26, 2015

More lessons on knowledge capture from NASA

It turns out that I'm not quite done with the NASA documents. I stumbled across another interesting report: JSC-66136r1, Space shuttle guidance, navigation, and rendezvous knowledge capture reports (October 2011).



Knowledge capture is a crucial issue for NASA given the nature of its mission: big, dangerous, expensive, constantly shifting, etc. The report talks about the challenges of actually capturing knowledge largely due to granularity. Histories, for example, are interesting but typically too general for specialist use while other types of codification such as computer code and detailed reports are simply too narrow. It's a Goldilocks problem.

The good stuff starts on page 32. The report notes that "knowledge capture and management" includes four steps:

  1. People
  2. Content
  3. Process
  4. Information Technology

The report then laments that knowledge capture efforts often start with IT and work backwards, to poor results. Specifically, it is difficult to find "technical subject matter experts that possess the written, verbal, and graphic communications skills need[ed] to effectively perform knowledge capture".

The authors are quite adamant that the lower steps in the model are most important and note that: "While a significant amount of knowledge capture... can be performed at the lower levels of corporations and government agencies, visible leadership and support from senior management is needed to ensure the success and continuity of such efforts, particularly for the creation and implementation of knowledge management processes... and identification, procurement, and integration of information technology."

Some knowledge capture techniques include:

  • just-in-time learning via social networking, but it is susceptible to limitations of human memory
  • formal reports
  • informal memos written in complete sentences with tables, illustrations, and references
  • well-written status reports


"Charts with bullet points and spreadsheets omit much background information that may be understood by the original audience, but will not be known to future researchers. This makes charts and spreadsheets difficult to learn from."

Documentation should include experiences, not just lessons learned. People tend to remember stories and they are effective for teaching and transferring knowledge. Exemplars of good reports include "Hypergolic propellants: the handling hazards and lessons learned from use" and the collection of "System Failure Case Studies".

A multi-step process would include:

  1. Identification -- conduct research to identify lessons. This involves primary source materials and interviews. Research and interviewing skills are required to identify key knowledge, experience and lessons.
  2. Creation -- create a story or narrative using the results of the research in step 1. This requires skills in reasoning and in verbal, written, and visual communication.
  3. Capture -- document the story in some form of media (formal report, informal memo, presentation, training material, procedure, case study, video, audio, etc.)
  4. Sharing/Retrieval -- make the media available.

Presentations are troublesome artifacts. Additional valuable information includes: why was the presentation created? What discussion was conducted during the presentation? What action was taken, if any, as a result?

A basic set of questions includes:

  1. What did we do?
  2. Why did we do it?
  3. When did we do it?
  4. How did we do it?
  5. Why did we do it that way?
  6. What happened?
  7. What challenges did we encounter?
  8. What did we learn?
  9. Is there something we wish we would have done differently?

Interesting. The references lead me to two other docs:

Goodman's Best practices for researching and documenting lessons learned




  • The scope of the report must be determined before starting out.
  • People working on lessons learned projects must have some aptitude for writing, researching, and creating engaging documents.
  • Define the report audience, scope, and outline
  • Lessons learned reports are not requirements documents
  • Address both successful and challenged projects
  • Keep lessons learned separate from technical history or process descriptions
  • A section should have only one author
  • Standardize production tools, review processes, and editorial process.
  • Recognize that documentation can come from anywhere
  • Interview SMEs (but recognize that "the optimal source of lessons learned is usually documentation created at the time the lessons were learned")
  • Explain the goals, process, and value of lessons learned to the SMEs
  • Document lessons learned during the project and at project conclusion

Writing the report

  • Stick to the facts
  • Be objective
  • Document lessons so that they will be easy to understand and apply
  • Include high-level background material for context
  • Avoid duplicating low-level detail that is available elsewhere

Review and revision

  • Have the report reviewed by SMEs who did not contribute
  • Build consensus
  • Assign priorities to suggestions for expanding the report scope


The second document is Goodman's Knowledge capture and management for space flight systems



The introduction gives us an interesting statement: "Follow[ing] the transition from the development to the flight phase, loss of underlying theory and rationale governing design and requirements occur[s] through a number of mechanisms. This degrades the quality of engineering work resulting in increased life cycle costs and risk to mission success and safety of flight."

Knowledge capture is important for the development of professionals. In most cases, engineers and others don't have the opportunity to work on many small projects and develop appropriate technical skills.

Sources of knowledge include:

  1. Program histories
  2. Textbooks, university classes, short courses/continuing education
  3. Journal articles and conference papers
  4. Internal sources
    1. presentations and technical reports
    2. software requirements documentation
    3. derivations of equations
    4. databases and archives


Brain book? See 32.

NASA maintains an informal knowledge base that allows users to submit/capture informal artifacts in a very loosely structured manner. Donors get their material back (and, if applicable, a digital copy).

The Columbia accident reports clearly pointed to the limitations of presentations as a means of knowledge transfer. Detailed reports are better.




The scent of vinegar and the nature of documents

Introduction to Chapter 7 Reading the Background of Brown and Duguid's Social Life of Information (pg. 173-174):

"I was working in an archive of a 250-year-old business, reading correspondence form about the time of the American Revolution. Incoming letters were stored in wooden boxes about the size of a standard Styrofoam picnic cooler, each containing a fair portion of dust as old as the letters. As opening a letter triggered a brief asthma attack, I wore a scarf tied over my nose and mouth. Despite my bandit's attire, my nose ran, my eyes wept, and I coughed, wheezed, and snorted. I longed for a digital system that would hold the information from the letters and leave the paper and dust behind.

"One afternoon, another historian came to work on a similar box. He read barely a word. Instead, he picked out bundles of letters and, in a move that sent my sinuses into shock, ran each letter beneath his nose and took a deep breath, at times almost inhaling the letter itself but always getting a good dose of dust. Sometimes, after a particularly profound sniff, he would open the letter, glance at it briefly, make a note and move on.

"Choking behind my mask, I asked him what he was doing. he was, he told me, a medical historian. (A profession to avoid if you have asthma.) He was documenting outbreaks of cholera. When the disease occurred in a a town in the eighteenth century, all letters from that town were disinfected with vinegar to prevent the disease from spreading. By sniffing for the faint traces of vinegar that survived 250 years and noting the date and source of the letters, he as able to chart the progress of cholera outbreaks.

"His research threw new light on the letters that I was reading. Now cheery letters telling customers and creditors that all was well, business thriving, and the future rosy read a little differently if a whiff of vinegar came off the page. Then the correspondent's cheeriness might be an act to prevent a collapse of business confidence -- unaware that he or she would be betrayed by the scent of vinegar."

- Paul Duguid, Trip Report from Portugal

Wednesday, March 25, 2015

White Collar Productivity: A Review

It's rare that I get excited about a new area of research... but I'm excited.

The business case for information governance is tricky. We all know that poor information management leads to worker inefficiencies but it's difficult to build a business case. Will saving a worker's time really result in better productivity? It depends who you ask. IT will say "absolutely". A contrary CFO will say "that's a soft benefit! No soup for you."

One of the challenges with researching this issue is that it has become so conflated with technological hubris. Vendors tell us that they have many solutions for overcoming issues of white collar productivity and that a small investment will make all of the problems go away. Somewhat strangely, the problem was very similar in the 1970s and 1980s but the "vendors" were providers of office furniture like Steelcase and Herman Miller!

Ideally, I want to explore solutions to this problem from an era when we weren't completely clueless (e.g., early railroads gave us many innovations but they were still operationally primitive) but hadn't yet been polluted by dot com techno-optimism. I found a potential approach in some of the work conducted by NASA in the mid-1980s. Perfect! Strangely, I discovered this work in the OPAC of my local university library by exploring alternate entries for the APQC! Apparently, in its early days it did some work for NASA. Of course, the OPAC coughed up some resources but the URLs were broken, so I had to scrounge up the documents elsewhere. Fortunately, the Internet never really forgets...

US Army Corps of Engineers -- Evaluating knowledge worker productivity : literature review (1994)

Ah, the Army Corps. It seems that I can't get away from these guys. They haunted my early work as a geotechnical engineer and now I find that they preceded me in knowledge worker productivity.



The report opens with a clear statement: "Quantifying knowledge work tasks is difficult." The trigger for the creation of the report was the introduction of a KWS (Knowledge Worker System). It notes that productivity is a key concept and provides a basic measure of "output divided by input" or O/I. The key measure is _productivity change_ between two intervals, resulting in a particular percentage.
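
The arithmetic is trivial, but for the record, a sketch (the report gives the concept, not an implementation; the sample numbers are invented):

```python
# Sketch of the report's basic measure: productivity = output / input (O/I),
# with the key metric being the percentage change between two periods.

def productivity(output: float, input_: float) -> float:
    return output / input_

def productivity_change(o1: float, i1: float, o2: float, i2: float) -> float:
    """Percentage change in O/I from period 1 to period 2."""
    p1, p2 = productivity(o1, i1), productivity(o2, i2)
    return 100.0 * (p2 - p1) / p1

# Hypothetical numbers: 120 reports from 400 hours, then 150 reports from 450 hours.
print(f"{productivity_change(120, 400, 150, 450):.1f}% change")  # +11.1%
```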

Unfortunately, this approach isn't great for knowledge workers:

"As long as the workforce consisted largely of manufacturing jobs, these techniques were adequate. The early measurement techniques, however, are not well suite to 'white-collar' work because such work is not repetitive or simple."

Apparently there is a final USACERL technical report on this issue... somewhere.

Ideally, a new technology will basically change the linear input/output relationship, leading to a steeper line (i.e., increased productivity). But there are challenges with this view, namely: inefficiency, input/output changes (if inputs drop in quality, outputs will also drop), nonconstant returns (the O/I line might be a curve, a stepped line, or discontinuous).



There are many ways of defining productivity but there should be three objectives: to identify improvements; to decide how to reallocate resources; to determine how well goals have been met.

There is always a tension between _macroproductivity_ (at a national level), _microproductivity_ (at a business level), and _nanoproductivity_ (at a suborganization level). The challenge is with white collar work: "Knowledge work is all work whose output is mainly intangible, whose input is not clearly definable, and that allows a high degree of individual discretion in the task. This difference in work content requires different approaches to productivity evaluation."

A challenge with measuring knowledge worker productivity is that individual gains don't necessarily translate to others. In general, productivity should be measured at the work group level. Individual measurement is challenging:

"The nonroutine nature of knowledge work means that it is very difficult to measure a norm. There is no obvious average to observe and record, so any measure will be somewhat inaccurate."

The other challenge is what to actually measure:

"The work is so complex than an artificial indicator is evaluated rather than the actual work. Often, the indicator is chose because it is easily quantified. This approach ignores potentially important aspects of the output, such as quality."

Regardless, you need to measure. Collect data via inquiry, observation, or through system data or documentation.

One classification approach is to evaluate tasks by the lowest level of employee that can execute them and then compile a matrix to determine if workers are performing at, below, or above their level. Of course, this approach assumes complete task detail and that the inefficiencies lie within the individual.
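
A quick sketch of how such a matrix might be compiled; the grades, tasks, and assignments are all invented:

```python
# Sketch of the task-level classification described above: each task is
# tagged with the lowest grade able to execute it, then actual assignments
# are tallied into an at/above/below matrix.
from collections import Counter

GRADES = ["clerk", "analyst", "senior_analyst"]  # ordered low to high

tasks = [  # (task, lowest grade able to do it, grade actually doing it)
    ("file records",    "clerk",          "analyst"),
    ("draft report",    "analyst",        "analyst"),
    ("review findings", "senior_analyst", "analyst"),
    ("photocopying",    "clerk",          "senior_analyst"),
]

matrix = Counter()
for task, required, actual in tasks:
    diff = GRADES.index(actual) - GRADES.index(required)
    level = "at" if diff == 0 else ("above" if diff > 0 else "below")
    matrix[level] += 1

# "above" here means an over-qualified worker doing the task -- the
# inefficiency the method is designed to surface.
print(dict(matrix))  # {'above': 2, 'at': 1, 'below': 1}
```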

Sink (1985) apparently developed some great acronyms for various techniques, including: Multi-Factor Productivity Measurement Model (MFPMM), Normative Productivity Measurement Methodology (NPMM), and Multi-Criteria Performance/Productivity Measurement Technique (MCP/PMT).

On measurement, there are a few best practices: get worker participation in establishing the productivity measures; if a process is too complex to measure, use a less complex sub-process; use the best measure, even if several different measures must be used; don't expect perfect accuracy -- it's about trends; finally, "measuring is better than not measuring".

Representing work is challenging but there are a few different hierarchies. The first dimension is the "components of work" for which blue-collar and white-collar work have different profiles:

  • Knowledge use. The amount and complexity of information required to do the work
  • Decision making. Application of knowledge to determine how to process the work.
  • Complexity. Difficulty of the job.
  • Time per job. Time spent completing the job.
  • Repetitive. A function done the same way every time.
  • Volume. Number of times the activity will occur in a given time cycle.
  • Skilled activity. Physical difficulty of performing the work; inversely relates to the mental difficulty or complexity. Some activities require both, e.g., surgery.
  • Structured. Constraints on how, when, where, and what is done.


NOTE: The "skilled activity" component is defined as a physical dimension but we know that there are other types of learned/tacit skills.

There are a few different techniques for work measurement:

  • Group 1: Complex setup, complex implementation. Predetermined time-motion studies, stop-watch studies, logging.
  • Group 2: Complex setup, simple implementation. Self-logging, sampling, counting.
  • Group 3: Simpler setup, moderate implementation. Committee, estimation.


The appendix contains an interesting definition for _knowledge_: "Relational information about objects or groups of objects. Knowledge allows the worker to use data in performing an activity."

Reference: USACERL Interim Report FF-94/27, Evaluating Knowledge Worker Productivity: Literature Review (http://www.dtic.mil/dtic/tr/fulltext/u2/a283866.pdf)

NASA -- R&D Productivity: New Challenges for the US Space Program (1985) (http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19860005687.pdf)

These conference proceedings contain some interesting papers about white-collar productivity (along with a whole lot of mysterious highly technical space-age mumbo jumbo!).

I particularly like this document because it is so clearly not of the Internet age. The attention to printing and reproduction is so awesomely... quaint. Hopefully, it contains some ground truth.

There are three papers of particular interest in this thing:

  • White collar productivity improvement: a success story by Don Hutchinson and E.L. Fransen (518)
  • White collar productivity improvement in a government research and development administrative support organization by Bradley Baker (529)
  • White collar productivity improvement sponsored action research, executive summary and findings by Steven Leth (571)

(Although I also find some of the other titles compelling, like: "Space crew productivity: the driving factor in space station design")



Let's start with the Hutchinson and Fransen paper.

The paper is basically a case study from an October 1984 project at the McDonnell Douglas Astronautics Company. The project follows the APC model. The study was applicable to 33 employees of a financial controls department.

One of the challenges was that there were a lot of different and sometimes competing quality programs and there was little early feedback from the program to encourage progress.

The WBS basically looked like:
- Pilot introduction
-- management
-- employees
- Diagnosis
-- survey
-- interviews
-- synthesis
-- feedback
-- action items
- Objectives
-- management sessions
-- feedback
- Measurement
-- nominal group
-- integrate measures
-- assign weights
-- monitor/feedback
- Service (re)design
-- map service
-- identify needs
-- redesign/refine
- Team Development
-- identify interfaces
-- clarify roles
-- commit support
- Technology Parameters
-- review parameters
-- enlist vendor support
-- implementation

The initial presentation and meetings were met with skepticism by employees but the survey and results quickly solidified engagement. The paper then gives some excruciating detail on the process. The conclusion notes:

"Improvements accrue to each of the three groups when the members of those groups believe in the process."

Success indicators of the project included:
- Improvement in the quality of work attitudes, leadership, communication, participation, goal-setting, measurement and analysis, rewards and recognition, and resource utilization
- Improved user relationships with the department. This process started with the identification of products and services and identification of user perception. "Effective interaction begins to occur when the department is viewed through the eye of the user."
- Creation of a belief in management.

Key lessons included:
- management support was important.
- users needed to feel engaged
- user involvement was important
- focus is on "effectiveness", not "efficiency". Ultimately, effectiveness will drive efficiency.

"The key to productivity improvement is through the development of a recognition that it is a continuous process."

Overall, the study was interesting but not surprising. It could, however, be valuable reading for young analysts who aren't sure what a workshop process should look like.



Let's look at the second paper by Bradley Baker. It describes a similar process in the Procurement Division of the NASA-Lewis Research Center involving 108 persons.

Initial investigation indicated a few symptoms of a disengaged work force: "little or nothing has come out of [previous initiatives]", "decisions get made in the chief's offices without input from lower levels", "everyone in the Division at times feels isolated, cut off, or by-passed", "people follow the chain of command", etc.

The discovery phase indicated that different levels of management had different perspectives on the most important mandates and goals of the organization. There was some degree of schedule slippage and then challenges regarding the introduction of new proposed measures due to the lack of involvement of some parties:

"After the first meeting and the passing of several days, passions were calmed and in subsequent meeting, the Division Chief provided more visible support and some protection to the recommendations, while balancing this with an openness to rational, constructive comments."

The Task Force met weekly to monitor the progress of various subcommittees as they worked on the phases of Service Redesign, Teamwork, and Technology parameters.

APC recommended a methodology where the procurement process was divided into specific parts. Each part was then assessed on an as-is and to-be basis to generate a list of what were essentially requirements and recommendations. Each subgroup was headed by two task force members who then recruited "knowledgeable, helpful nonsupervisory employees and supervisors."

One challenge occurred in bringing forward recommendations without the sponsor present, resulting in "rocky" and "non-constructive" comments. Subsequently, recommendations were made in a retreat setting with the sponsor present.



And finally, on to the executive summary of the big APC study. Unfortunately, I can't find the details [n.b., the report is available as a historical novelty at the APQC site and is in a few library collections. See below].

The summary leads with some examples of how knowledge workers have changed their deliverables to focus on increased "effectiveness"... not necessarily "efficiency."

The summary notes that employees often view these initiatives as a cost/employee-cutting approach and feel alienated. They reference a 1982 study executed with Steelcase (listed -- but not available on Amazon) that notes that both the knowledge and the process of white-collar process improvement are underdeveloped.

APC's approach is really about involvement and innovation to improve process "outputs". The paper describes the process (similar to above):

- Diagnosis phase
-- clarification of and agreement on the work unit's outputs and services
-- definition of user's needs and expectations
-- identification of leverage points for productivity gains
- Objectives phase
-- clarification of the unit's mission and purpose
-- creation of a vision for achieving the mission and purpose
-- objectives tied to the development and delivery of services
- Measurement phase
-- measures emphasizing service effectiveness and critical points
-- means to track and feed back data for problem solving
-- data useful for ongoing improvements
- Service (Re)Design
-- clear, agreed upon approaches to service development and delivery
-- services that are consistent with objectives and measures
-- improved capability to identify opportunities for improvements and execute changes
-- a framework for effective implementation of new office technology
- Team development
-- smoothed working relationship among coworkers and with other units for functional groups
-- agreement on back-up personnel and procedures
-- improved morale, enhanced cooperation, active participation
- Technological parameters
-- parameters for technology directly in support of services
-- more efficient performance of routine tasks
-- enhanced communication ability

After two years, various case studies indicated a variety of observations:

1. "white collar productivity improvement is founded on basic issues of vision, orientation, and management practices
2. "attention to 'operational'  issues will enable productivity improvement to take place
3. "white collar professionals require additional training in order to deliver their services effectively [n.b., but what is "training"?]
4. "administrative systems within an organization offer a major opportunity for productivity improvement
5. "measurement of white collar work is both possible and desirable
6. "technology, such as computer mediated systems or new office environmental designs, is best justified when linked to critical junctures for features of white collar services
7. "self-reliance is a key to ongoing productivity improvements
8. "white collar productivity improvement is dependent on seven critical success factors:
- a climate supportive of change, innovation, and risk-taking
- a vision for the future of the function that is shared among all employees
- emphasis on service issues and opportunities
- a flexible methodology, one the function can adapt to its own circumstances and business
- leadership by the function's managers, not by a consultant or lower-level employee
- technology directly linked to productivity leverage points
- involvement and 'buy-in' by most employees at all levels of the function."

Wow. So in the early days, these conversations were as much about encouraging the adoption of new office furniture as they were about encouraging the adoption of new technology!

And I found the original report. You can get it from the APQC but it costs like $50! Interlibrary loan? U of Guelph and U of Sask apparently have copies. And the Steelcase report is apparently at Western.

Tuesday, March 24, 2015

Study: Adobe/IDC "The document disconnect: hidden opportunity, big payoff"

The Study:

  • Title: The document disconnect: hidden opportunity, big payoff
  • March 2015 (IDC InfoBrief, sponsored by Adobe)
  • Global, web-based survey of 1,518 line-of-business leaders, split between the USA, UK, France, Germany, Japan, and Australia.
  • 60% of respondents came from large companies (1,000+ employees)


Results:

  • Management in "sales, HR, procurement, and other departments" estimate that fixing "document disconnect" would provide:
    • 36% increase in revenue
    • 30% cost reduction
    • 23% risk reduction
  • 46% have "impaired ability to plan, forecast and budget due to lack of visibility"
  • 36% of time is spent on administrative tasks
  • 76% say document process issues "impact revenue recognition or create auditor issues"
  • 63% of information workers use a mobile device
  • 45% say customers want to interact using mobile
  • 73% spend time working at a location other than their office
  • 52% have internal systems that don't "talk" to each other
  • 50% say external collaborators use different systems
  • 45% say they get documents missing signatures or approvals
  • 46% are unsure if they have copies of signed agreements
  • 51% have lost or misfiled documents
  • 55% can't tell whether documents have been viewed/reviewed/signed by the appropriate people
  • 24% say document routing, reviewing, and approval takes too long
  • 43% must use several disconnected systems, often rekeying or copy/pasting
  • 46% say that better document processes would reduce cycle time
  • 72% agree that better document processes would improve brand and/or customer satisfaction
  • 45% say departmental productivity would improve
  • 24% say compliance and business risk would be reduced
  • 23% say process visibility and agility would increase


Monday, March 23, 2015

Gracie Barra's Positional Hierarchy

I have a side project in mind. I'd like a way to automate or organize my training journal (a minimal record sketch follows this list). Some key fields would include:

  • Grips
  • Steps
  • Precursor techniques
  • Chain techniques
  • Applicable videos/references
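
As a first cut, each journal entry could be a simple record built from those fields; everything beyond the field names themselves is an assumption:

```python
# Sketch of a training-journal record using the fields listed above.
# Field names come from the list; types and the example are assumptions.
from dataclasses import dataclass, field

@dataclass
class TechniqueEntry:
    name: str
    position: str                                        # e.g., a node from the GB hierarchy below
    grips: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    precursor_techniques: list[str] = field(default_factory=list)
    chain_techniques: list[str] = field(default_factory=list)
    references: list[str] = field(default_factory=list)  # videos, URLs, books

kimura_entry = TechniqueEntry(
    name="kimura from side control",
    position="side control",
    grips=["figure-four on the far wrist"],
    steps=["trap the outside arm", "clear the near arm", "step over the head", "finish"],
    references=["https://www.youtube.com/watch?v=S1NpPbMGJ6I"],
)
print(kimura_entry.name)
```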

One of the challenges is coming up with a consistent naming convention for techniques. Judo has some consistency but BJJ really seems to be a bit of a hodgepodge, with each instructor taking their own approach. GB's hierarchy might be of some benefit:

  • Rear mount
  • Mount
  • Knee on belly
  • Side control
  • Half mount
  • Guard top/guard bottom
  • Turtle top/turtle bottom
  • Half guard bottom
  • Side control bottom
  • Knee on belly bottom
  • Mount bottom
  • Rear mount bottom


Of course, there are any number of other options here like x-guard, de la riva, 50/50, etc. Maybe these options could be lumped into "specials" since every taxonomy needs an "other" category.



Wikipedia's "wisdom of crowds" also has an interesting hierarchy:

  • Clinching
  • Takedowns
  • Throws
  • Sprawling
  • Submission holds
  • -- Joint locks
  • -- -- Armlocks
  • -- -- -- Americana
  • -- -- -- Armbar
  • -- -- -- Chicken wing
  • -- -- -- Hammerlock
  • -- -- Leglocks
  • -- Chokeholds and strangles
  • -- Clinch holds
  • -- -- Bear hugs
  • -- -- Collar ties
  • -- -- Overhooks
  • -- -- Pinch grip ties
  • -- -- Underhooks
  • -- Compression locks
  • -- Pain compliance
  • -- Pinning holds
  • Securing/Controlling/Pinning techniques
  • Escapes
  • Turnovers
  • Reversals/Sweeps

We could keep breaking this taxonomy down but there are some challenges. For example, is a crank a pain compliance technique or a joint lock? I suppose we would have to use user warrant to figure it out.
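
One pragmatic way to handle cases like the crank: keep the tree for browsing but let each technique carry multiple category tags, so it can live under both "joint locks" and "pain compliance". A hypothetical sketch (all names are illustrative):

```python
# Keep the hierarchy as a nested dict for browsing, but tag techniques with
# multiple categories so polyhierarchy cases don't force a single parent.
taxonomy = {
    "submission holds": {
        "joint locks": {"armlocks": {}, "leglocks": {}},
        "chokeholds and strangles": {},
        "pain compliance": {},
    },
}

technique_tags = {
    "americana": ["joint locks", "armlocks"],
    "neck crank": ["joint locks", "pain compliance"],  # lives in both branches
}

def techniques_under(category: str) -> list[str]:
    """Return every technique tagged with the given category."""
    return [name for name, tags in technique_tags.items() if category in tags]

print(techniques_under("pain compliance"))  # ['neck crank']
```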

Study: Adobe's "Paper Jam"

Study:

  • Adobe released "Paper Jam: Why Documents are Dragging Us Down" in March 2015. 
  • The study collected data from 5,038 "Office Professionals" in February of 2015. 
  • The n-counts were about 1000 from each of the USA, UK, Germany, France, and Australia.



Results:

  • 83% of respondents feel their work success and productivity is hindered by outdated ways of working with documents.
  • 61% of workers (69% USA) would change jobs if the benefit was a dramatic reduction in paperwork.
  • 28% (34% USA) feel that inefficient processes hold back career advancement
  • 55% (61% USA) feel that inefficient processes distract them from more important tasks
  • 43% feel that email attachments make life more complicated
  • Inability to find known documents (82%) and version control (78%) are the most frustrating document problems
  • 43% (49% USA) of professionals have lost important electronic information
  • 52% of professionals "admit to being emotionally attached to paper documents"
  • 81% believe that information overload is a reality. Of those, 68% blame a "constant stream of new emails" as the biggest cause; 44% blame "social media newsfeed"; 36% blame "instant messaging"; 34% blame "incoming phone calls"; 26% blame "unsorted paper piles"; and 24% blame "interactions with people".

Cost of paper

Estimating the cost of paper can be challenging. In a recent blog post, Adobe provided some indication of costs.

Adobe estimates that Adobe Reader was opened over 4 billion times in 2014. The environmental cost of printing those documents would be:

  • 2 billion pounds of paper
  • 17 million trees
  • 11.4 billion gallons of water
  • enough energy to power a city the size of Cambridge, England for one year


Adobe extends this analysis to users of Adobe e-sign services to estimate that it has saved:

  • 89 million gallons of water
  • 31 million pounds of wood
  • 132,000 trees
  • $42 million in paper and printing costs
  • 8000 tons of paper

Its GreenMeter lets organizations run these calculations for themselves.

Sunday, March 22, 2015

Virtual Training 2015/03/22 #012.2

Kimura

I've discovered that I can use a few videos to work on some of the challenges that I've run into on the mats. Today's focus is on getting that appropriate upright kimura grip. I'm using one of Raphael Lovato's videos (https://www.youtube.com/watch?v=S1NpPbMGJ6I).

He starts the position from side control. He notes that a good side control requires hip-to-hip contact. He traps the outside arm with a really strong hook, even if he has to give up the cross face. A cue is to really keep that far elbow off the ground.

He also takes out the near arm that is probably blocking your hip. He pushes the arm down with the formerly cross-facing arm and then pins that arm with his shin like in a crucifix pin. He then windshield wipers his legs to pin the arm with his other leg.

Keep a strong hook on the arm using your head to pin the arm. Lift and pull the trapped arm keeping your weight on it. You almost want to pull the arm all the way to the floor with your opponent up on their side. Don't let them grab their belt!

Step over their head. Switch arms and get the kimura grip. Basically, the hand that is on the side of your opponent's face is the one that will be gripping your opponent's hand. You will probably have to switch grips. Think "face palm". Lovato is keen on the monkey grip. He notes that you want to curl your wrists to keep the tight position.

Getting the submission requires you to keep their elbow tight to your chest and to use a whole body movement.

There are some challenges to this approach, notably, I have to give up the sankajo on the wrist due to the monkey grip.

The other tip that I picked up came from Roy Harris. In one of his videos he demonstrates that you can get the kimura and americana more quickly if their arm is more extended... but you give up some degree of control. To get this submission you really need to get your weight over your opponent's upper arm/shoulder.

UPDATE -- an old Dean Lister video showed another interesting tip for getting the conventional kimura, particularly in no-gi where you don't have the belt grab. He doesn't step over the head. Instead, he stays low and slides backwards so that his chin is basically on his partner's non-entangled arm. He still gets the tap.
