American Printer's mission is to be the most reliable and authoritative source of information on integrating tomorrow's technology with today's management.

Building better binderies through benchmarking

Aug 15, 2003 12:00 AM


This article is excerpted from the book, "Benchmarking the Bindery." To order, contact NAPL at (800) 642-6275, option 3.

Many companies have a wealth of data but seldom share it with the people who produce their products. Benchmarking provides a means for an organization to identify, gather and share its accumulated information.

When properly applied, benchmarking leads to continual process improvement. It helps get people out of their day-to-day ruts so that they can make meaningful, sustainable process improvements.

Some bindery supervisors claim that benchmarking their department is impossible, given the variety of product sizes and paper stocks, as well as the number of machine pockets used. If this were true, however, bindery processes (machine speed, set-up times and unscheduled stops) would be in perpetual chaos. There would be no statistical predictability to a given event. But we have observed statistically predictable output from every bindery machine that we have measured, including saddlestitchers, perfect binders, folders and cutters.

The data-collection system in most binderies is used primarily for job costing and scheduling, rather than for measuring and managing processes. The system should provide accurate measurements for:

  • Run time

  • Makeready time

  • Stop time

  • Gross production

  • Net production

  • Number of makereadies

  • Number of machine stops.

Using this data, the supervisor can determine the amount of work produced—as well as efficiency, waste rates and any problems that may occur.

Data-collection pitfalls
Many bindery departments’ data-collection efforts rely on time tickets completed by machine operators. A ticket typically includes the job name, a code for the specific function (run, makeready, stop) and the quantity of work completed. Unfortunately, some operators fill out the time ticket at the end of a shift rather than as their shift progresses. And some may be tempted to make the numbers meet a supervisor’s expectations. If the supervisor stresses run speeds, for example, run speeds may improve while makereadies lag.

Bindery supervisors should review and approve all time tickets before the data is recorded. Since decisions will be based on the data from these time tickets, it’s essential to correct any errors immediately.

Once data has been collected, it can be entered into a database to compare/benchmark current vs. past performance levels. Fig. 1 illustrates a file for creating benchmarking charts. (Fig. 1 not available online.)

The numbers under the italicized headings are calculated automatically using the equation feature in the spreadsheet software. The last column, "prior year," is the benchmark for the current year’s level of net production. The goal, obviously, is to operate the folder at a higher level of production compared to the prior year. Benchmarks for other categories, such as makeready average, waste and gross speed, can be added simply by inserting an additional row of data.

From the Fig. 1 data, we can calculate:

  • Gross run speed (net folds/run hours) ........... 9,550 folds per hour

  • Net run speed (net folds/total hours; total hours calculated by adding run, MR, stop and non-charge times) ............ 5,122 folds per hour

  • Waste (gross count - net count) ........... 8,014 folds

  • Percent waste (waste folds/net folds) ........... 3.9 percent

  • Average makeready time (makeready hours/number of makereadies) ........... 22.4 minutes

  • Average stop-time duration (stop hours/number of unscheduled stops) ........... 32.2 minutes
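The arithmetic behind these figures is simple enough to verify by hand. As a sketch, here is how the six values could be computed from one week's raw counts; the input numbers below are illustrative and roughly in line with the results quoted above, not the actual Fig. 1 data (which is not available online):

```python
# One week of raw figures for a single folder -- illustrative values only.
gross_count = 213_500      # total folds produced, including waste
net_count = 205_500        # good folds delivered
run_hours = 21.5
makeready_hours = 6.0
stop_hours = 9.0
noncharge_hours = 3.5
num_makereadies = 16
num_stops = 17

# Total hours = run + makeready + stop + non-charge time.
total_hours = run_hours + makeready_hours + stop_hours + noncharge_hours

gross_run_speed = net_count / run_hours        # folds per run hour
net_run_speed = net_count / total_hours        # folds per total hour worked
waste = gross_count - net_count                # folds
percent_waste = 100 * waste / net_count        # waste as a percent of net
avg_makeready_min = 60 * makeready_hours / num_makereadies
avg_stop_min = 60 * stop_hours / num_stops

print(f"Gross run speed:   {gross_run_speed:,.0f} folds/hr")
print(f"Net run speed:     {net_run_speed:,.0f} folds/hr")
print(f"Waste:             {waste:,} folds ({percent_waste:.1f}%)")
print(f"Avg makeready:     {avg_makeready_min:.1f} min")
print(f"Avg stop duration: {avg_stop_min:.1f} min")
```

Note that net run speed divides by all hours worked, not just run hours, which is why it is so much lower than the gross figure.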

A baker’s dozen weeks of data

Granted, with only one week’s worth of production entered, this table does not tell us much. We need to accumulate several more weeks of data before we can create benchmarking charts. From our experience, 13 "rolling" weeks of measurements is a short enough period for the data to be timely, but long enough to spot meaningful trends. (Rolling means that new data is added weekly and the oldest week is removed from the chart.) See Fig. 2 for an example of production totals across 13 weeks. (Fig. 2 not available online.)
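A rolling window like this is easy to maintain in software. A minimal sketch using Python's `collections.deque`, which discards the oldest week automatically once 13 weeks have accumulated (the weekly speeds below are hypothetical):

```python
from collections import deque

ROLLING_WEEKS = 13
window = deque(maxlen=ROLLING_WEEKS)   # oldest week falls off automatically

# Hypothetical weekly net run speeds (folds per hour) for 15 weeks.
weekly_speeds = [5122, 4870, 5340, 4955, 5210, 3845, 5010, 5430,
                 5886, 5105, 4990, 5275, 5150, 5060, 5320]
for speed in weekly_speeds:
    window.append(speed)

# After 15 appends, weeks 1 and 2 have rolled off the chart.
print(len(window))                    # 13
print(sum(window) / len(window))      # current 13-week average net run speed
```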

Using the "net run speed" and the "prior year" columns, we can determine a folder’s average net run speed, as shown in Fig. 3. (Fig. 3 not available online.)

The consistency and predictability of this data often surprises experienced bindery supervisors. They expect to see a tremendous amount of variation in net run speed, based on the workload, type of paper used, number of folds required and the run length of the job.

Some supervisors have asked why specifications or "work mix" aren’t mentioned. Remember that we are measuring the process. We are not concerned about the variables, which are statistically predictable and part of the ongoing process. Consider, for example, the folded papers’ basis weight. While the paper type affects the run speed of the folder, this variation repeats itself and is therefore predictable and part of the ongoing process of running a folder. This data simply represents the ongoing performance of the process.

Understanding numerical data and process variation
As shown in Fig. 2, over a 13-week period, the net run speeds varied from a high of 5,886 folds per hour to a low of 3,845. Was this variation normal and merely attributable to the process variables or was it abnormal, signifying a process change (either good or bad)? Should you reward the operators for the week of high productivity and reprimand them for the low one?

A control chart—a time-series graph that has a center line and upper and lower control limits—can help answer these questions. The variation among data points on a control chart can be characterized as either "noise" or "signal." The "noise" of a process is the normal variation that is seen within the upper and lower control limits. It is a predictable outcome of the process. A "signal," by contrast, is unpredictable variation, caused by a change in the process—a new procedure, method, material, etc. Any data points that appear either above the upper limit or below the lower limit represent excessive variation and are a clear signal that something unique is occurring in the process.

While noise doesn’t require any supervisory action, a signal must be investigated. If it is a positive signal, incorporate the process changes into the procedure that governs the process. If the signal is negative, eliminate the cause(s) from the process.
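One common way to build such a chart for weekly values is the individuals (XmR) chart that Wheeler describes: the control limits sit at the mean plus or minus 2.66 times the average week-to-week moving range. The sketch below computes those limits and flags out-of-limit points as signals; the weekly speeds are hypothetical, with one deliberately bad week:

```python
def xmr_limits(values):
    """Center line and control limits for an individuals (XmR) chart.

    Limits are placed at the mean +/- 2.66 times the average moving
    range -- the standard construction for charts of individual values.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def signals(values):
    """Weeks whose value falls outside the limits -- 'signals', not 'noise'."""
    mean, lcl, ucl = xmr_limits(values)
    return [(week, v) for week, v in enumerate(values, start=1)
            if v < lcl or v > ucl]

# Hypothetical 13 weeks of net run speeds; week 10 is abnormally low.
speeds = [5000, 5100, 4950, 5050, 5000, 5100, 4900,
          5050, 5000, 2000, 5100, 4950, 5000]
print(signals(speeds))    # week 10 is flagged for investigation
```

A refinement Wheeler discusses is screening unusually large moving ranges before averaging them; the plain average shown here is the simplest version.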

The difference between noise and a signal is what makes data confusing. A meaningful comparison can’t be made from a single weekly value for net run speed. If you only know that the net speed of a machine was 9,500 pieces per hour for the past week, it’s impossible to determine whether the machine’s performance was good, bad or average. You must know the center line, as well as the upper and lower control limits of the process, before you can form an opinion. (For additional control-chart information, see "Understanding Variation: The Key To Managing Chaos," by Donald J. Wheeler; $35.)

The goal of the bindery supervisor is to:

  • Reduce the number of data points that fall below the lower control limit

  • Improve the process by beating the average performance level as frequently as possible—eight or more weeks above or below the center line

  • Reduce process variation by managing predictable processes
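The "eight or more weeks" goal corresponds to a standard control-chart run rule: eight or more consecutive points on the same side of the center line suggest the process average has genuinely shifted, even if no single point crosses a limit. A sketch of how such a run could be detected (function name and data are illustrative):

```python
def longest_run_one_side(values, center):
    """Length of the longest run of consecutive points strictly
    above or strictly below the center line."""
    best = current = 0
    prev_side = 0
    for v in values:
        side = (v > center) - (v < center)   # +1 above, -1 below, 0 on the line
        if side != 0 and side == prev_side:
            current += 1
        else:
            current = 1 if side != 0 else 0
        best = max(best, current)
        prev_side = side
    return best

# Hypothetical weekly net run speeds against a 5,000 folds/hr center line:
# eight straight weeks above center would indicate a sustained improvement.
speeds = [5150, 5220, 5080, 5190, 5240, 5110, 5175, 5210, 4890, 4950]
print(longest_run_one_side(speeds, 5000))    # 8
```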

Predictable throughput is the bindery supervisor’s friend. Scheduling and managing a process that has a great deal of variability is a frustrating endeavor and typically results in more failures than successes. By improving the process and reducing the level of unpredictability, the bindery supervisor can drive the production schedule, rather than operating at its mercy.

Share the information
Once you have good information, give it to the people who can use it. Update charts weekly, and ensure that all employees know where to find this information. Share and discuss the data at regularly scheduled department meetings.

Discussing the charts in a group setting adds value to the information. Operators looking at information on a bulletin board is one thing; having a group fully understand and agree on what they’ve seen is even better. And the group taking action on what they’ve learned is what truly matters.

SIDEBAR: The benchmarking difference
Benchmarking differs from production standards, time studies and manufacturer-supplied data in the following ways:

  • Data is gathered, compiled and used by the bindery supervisor. No additional staff or production standards are required.

  • It compares weekly performance data within the same department. Because the environment is constant, the effects of new techniques/methods are readily apparent.

  • It measures the team speed of a function rather than individual performance.

  • It makes good employees better, because it measures their work every day and rewards documented achievement, rather than spot performance, political positioning or perceived achievement.

SIDEBAR: Scheduling: friend or foe in the bindery?
The biggest obstacle to implementing a bindery benchmarking program is the amount of time the typical supervisor spends on scheduling issues. A bindery supervisor who is expected to devote all of his or her time to meeting deadlines has no opportunity to improve the department’s efficiency. He or she is constantly juggling bindery crews and machine loading and trying to explain why things can’t happen as scheduled.

Production schedules should be based on net run speeds (good units produced/total hours worked). When predictions are based on production standards, rated speeds of the equipment, or a wish and a prayer, frustration results.
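In practice, the difference between the two approaches is a single divisor. A hypothetical sketch comparing a completion-time estimate built on measured net run speed with one built on the machine's rated speed (both figures below are illustrative):

```python
def predicted_hours(quantity, net_run_speed):
    """Hours of wall-clock time to schedule for a job, based on net run
    speed: good units produced per total hour worked, including
    makeready, stop and non-charge time."""
    return quantity / net_run_speed

job_quantity = 40_000   # good folds required

# A folder rated at 15,000 folds/hr might show a measured net run
# speed closer to 5,000 once makereadies and stops are counted.
print(predicted_hours(job_quantity, 5_000))    # 8.0 hours -- realistic
print(predicted_hours(job_quantity, 15_000))   # about 2.7 hours -- wishful
```

Scheduling on the rated speed here promises the job in a third of the time the process can actually deliver, which is exactly the frustration the article describes.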

In the military, the difference between expected and delivered performance is called "ground truth." In the bindery, "ground truth" is what happens when facts contradict plans. It is not what management thinks is going to happen or what ought to happen. It is what really happens when bindery operators encounter obstacles such as mechanical problems, poor communication, bad press product, training issues, and missing or inferior supplies. These obstacles occur regularly and are therefore predictable and part of the ongoing process.

Benchmarking data includes these process variables. When supervisors concentrate on reducing and eliminating these variables, the bindery can produce at higher levels and run ahead of schedule.

Here are some scheduling issues for your organization to consider:

  • How much time does the bindery supervisor spend on scheduling issues?

  • Does your company have a full- or part-time nonproduction person who performs some or all of the scheduling functions? Is the bindery supervisor duplicating these efforts?

  • Does your bindery receive a complete schedule by machine and shift? Or do you receive a "management wish-list" of jobs?

  • How are bindery scheduling changes handled?

  • How is bindery scheduling handled when the supervisor is on vacation?

  • In your production meetings, do you discuss what must happen, what should happen or how things will happen?

  • How accurate are your completion-time predictions?

  • What percentage of bindery work is performed when scheduled?

  • How are mechanically impossible scheduling overloads handled? When are these situations recognized?

Robert Diehl, executive vice president, Hederman Brothers (Ridgeland, MS), and Peter Doyle, operations manager, Action Printing, a Gannett Co. (Fond du Lac, WI), are the co-authors of "Benchmarking the Bindery" (copyright 2002, Kenet Media Inc.). The book lists at $249.95 ($199.95 for NAPL members). To order, call (800) 642-6275, option 3.