Conference Begins Building a Foundation

We are in the midst of dramatic changes that are redefining and reshaping our world. Western Europe is consolidating into one common economic market. Eastern Europe is being integrated into the western fold. The Soviet Union is disintegrating, with new sovereign states emerging in its place. The role of the U.S. as the dominant economic engine of the world is diminishing while Japan ascends in its place. This is a unique time of opportunities and challenges. How well we respond to them will have repercussions well into the next century.

The theme of the 1991 Systems Thinking in Action Conference, “Building a Foundation for Change,” proposed a way to address the challenges that lie ahead. Both keynote speakers, Peter Senge and Jay Forrester, focused squarely on the critical issue of building capabilities and understanding that will endure throughout the turbulence that accompanies such periods of great change.

“We are embarking on one of the great frontiers of human endeavor…I see the frontier for the next 50 to 100 years as being a greater understanding of our social and economic systems.”

New Frontiers

“We are embarking on one of the great frontiers of human endeavor — similar to the founding of nation-states, the exploration of the surface of the earth, and the pursuit of scientific and technical knowledge,” declared keynote speaker Jay W. Forrester. “I see the frontier for the next 50 to 100 years as being a greater understanding of our social and economic systems.”

The timing is auspicious, according to Forrester, because the economy is in the midst of a long wave downturn that will spark fundamental changes in our society (see “Not All Recessions are Created Equal,” February 1991 for more on the long wave and the current recession). “Long wave downturns are windows of opportunity for great social and economic change,” he explained. “Old methodologies have been overdone and overbuilt, and the institutions behind them have been swept away. The public is looking for change. During downturns, foundations are laid for new ideas and methodologies that will flourish during the next upturn.”

Over 250 people from across the U.S. and the globe came together for the conference to hear Senge’s and Forrester’s context-setting remarks, acquire systems tools and techniques for developing new skills, and learn from the experience of managers who have been on the forefront of building their companies’ foundations for change. Summaries of both keynote speeches follow.

Peter M. Senge — Transforming the Practice of Management

I think the primary institutional contexts that we need to consider are the world of management and the world of public education — the two primary institutions in our society. They represent fundamental areas for profound and extensive innovation.

Focusing on the corporate world, let me summarize what I see happening as a paradigm shift from the resource-based organization to the knowledge-based organization. According to Peter Drucker, there have been two previous major changes in the evolution of the organization: first, at the turn of the century, when management became distinguished from ownership; and then in the 1920s, when fundamental changes at DuPont and General Motors introduced the command and control organization of today. “Now we are entering a third period of change,” writes Drucker. “A shift from the command and control organization, the organization of departments and divisions, to the information-based organization, the organization of knowledge specialists.”

The evolution of the Total Quality movement in Japan is further evidence of a growing shift toward managing knowledge. The foundation of the first wave of quality management was in statistics. However, Dr. W. Edwards Deming, the “father of Japanese management,” has taken to saying lately that statistics is only 2% of the work. The new book he is working on crystallizes the essence of his management philosophy around four points. Interestingly, the first of Deming’s four cornerstones is appreciation of a system. Similarly, the second wave of Total Quality is focusing more on the area of linguistics and anthropology, because the world of management is the world of ideas. If the work of the front line of an organization is to continually improve the physical processes, then perhaps the work of management is to continually enhance the base of knowledge.

The Shift Toward the Knowledge-Based Organisation

Multi-dimensional Aspects

An article in the most recent issue of the Harvard Business Review, written by Ikujiro Nonaka, summarizes quite well a variety of trends which are part of this shift. In his article, “The Knowledge-Creating Company,” he describes what this type of organization is like: “The centerpiece of the Japanese approach is the recognition that creating new knowledge is not simply a matter of processing objective information. Rather, it depends on tapping the tacit, and often highly subjective insights, intuitions and hunches, of individual employees and making those insights available for testing and use by the company as a whole.”

In characterizing the multidimensional aspects of the shift toward the knowledge-based organization, let me contrast five basic tasks in the resource-based organization and the knowledge-based organization: direction setting; thinking and executing; the nature of thinking; conflict resolution; and the role of leadership (see sidebar for an overview).

  • Direction Setting and Thinking and Executing. The key to success in the authoritarian, traditional organization is to have a few great thinkers at the top, design some good control systems throughout the organization, and get some good actors at the local level. The top thinks and the local acts. That’s the essence of a traditional organization.

In the knowledge-based organization, shared vision emerges from all levels. The responsibility of top management is to articulate a vision for the company that is broad enough so that workers can interpret and add to that vision, giving them the freedom and autonomy to set their own goals which will put that vision into action. Within this framework, thinking and acting can be merged at all levels of the organization.

  • Nature of Thinking. One of the central issues in the shift toward the knowledge-based organization is how power and authority will be distributed. Tight central control is not effective for working in a complex system. When you have people throughout an organization making important decisions, it’s absolutely critical that those people have some contextual knowledge or understanding of how their decisions affect others. Atomistic thinking, the hallmark of the resource-based organization, must give way to an understanding of the whole system. If we are going to distribute power and authority throughout the organization, we must also develop the necessary skills and capabilities for people to use that power and authority wisely.
  • Conflict Resolution. In the traditional model, mediating disputes usually means “the biggest stick wins.” In the corporation of the future, the emphasis must be on integrating diverse views and building shared mental models. The great range of differing perspectives and mental models in a corporation might seem like a big problem. In fact, it can be a rich source of new knowledge if a company knows how to challenge employees to continually examine their fundamental assumptions.
  • Role of Leadership. The fundamental task of the leader in the knowledge-based organization is to create conditions so that good decisions can be made throughout the entire organization. It is no longer enough to have great thinkers at the top who set direction, motivate people, and make important decisions. In the corporation of the future, thinking and acting must occur at all levels. For this to happen, the key questions we need to address are: How do we design the learning processes? What are the tools we need? And what are the ways in which we can literally build shared knowledge so we can enable good decision making throughout the organization?

 

Building a Foundation

If we really accept this paradigm shift as something that is occurring, we need to ask ourselves some questions. From the standpoint of an individual organization, do we want to ignore it? Do we want to get dragged along kicking and screaming? Or do we want to be a leader, and be out on the leading edge helping to create that new model?

I propose four “levels of attention” that we must focus on to build a foundation for such an organization: philosophy, attitudes and beliefs, skills and capabilities, and tools and artifacts. Philosophy is the vision, values, and sense of purpose that we articulate. Attitudes and beliefs are those values that reside more at the tacit or unconscious level. Artifacts, a term used by Buckminster Fuller, means those tools which, in helping us deal with practical and important issues, can shift our ways of thinking and interacting and actually influence and improve our skills and capabilities.

The greatest leverage lies in focusing both on the level of philosophy and the level of tools. I think the best strategy for building such an organization is to build from the top down and the bottom up simultaneously. Wherever I see an organization, West or East, that’s really made significant progress, there is a significant amount of tension on both levels.

“The fundamental task of the leader in the knowledge-based organization is to create conditions so that good decisions can be made throughout the entire organization.”

Jay Forrester — Education into the 21st Century

Our educational system is unrealistic in almost every way. Students work on solving problems that are artificial and irrelevant, in which neither the teacher nor the student is interested, and for which the facts have already been provided. That is just the opposite of the real world, where we start with a problem and then have to search for the necessary information and frame a solution.

The failure of our educational system has brought a great national lamenting, as well as many proposed solutions. A recent issue of Fortune magazine, for example, listed over 100 companies that have donated money to reforming the schools. But the amount of money being given is relatively small, and the tasks that the money is being earmarked for amount to doing more of what is already clearly not working. It is my intention to give you a glimpse of how the field of system dynamics can address some of the underlying reasons why education is not working and provide a framework that has great power, persuasiveness, and relevance for changing our current educational situation.

A New Framework

System dynamics turns upside down the American theory of education. In traditional education, children start out by learning facts. They then progress through learning how to analyze and break down problems, and then finally to synthesis — putting it all together.

System dynamics places synthesis at the beginning of the educational sequence. By junior high school, students already possess a wealth of facts about interpersonal relations, family life, community, and school. What they need is a framework into which those facts can be fitted. System dynamics simulation models allow the students to learn facts within the larger context of how the dynamics will play out in the real world.

Learner-Directed Learning

Currently there are probably about 30 high schools with substantial activity in bringing system dynamics simulations into the classroom, and about 300 schools that are doing something in this area. The greatest progress is being made in those schools where system dynamics is being used in conjunction with another important concept — learner-directed learning.

With learner-directed learning, students take on a substantial responsibility for what they learn and how they learn it. Operationally, this means that students work together in teams of two or three on a given project, while the teacher plays the role of a coach and advisor — someone who inspires and encourages them. This gives the 7th or 8th grade classroom the atmosphere of a research laboratory.

There seems to be no correlation between a student’s previous academic standing and his or her ability to do well in the new environment. Some students who have seemed “backwards” in the traditional classroom — who have had difficulty learning through memorizing facts or listening to lectures — may have a very keen understanding of how things interrelate and can do very well in the new setting.

In one school in Arizona, for example, an English teacher has the bottom-third track of students. Using the STELLA software by High Performance Systems (Hanover, NH), she created a system dynamics simulation of the psychological dynamics in Shakespeare’s Hamlet. For the first time, students began to discuss and debate issues among themselves. They argued over which one of them was like which character in the play, and what ordinary people would have done in Hamlet’s shoes. They asked the teacher for quantitative changes in the character of an actor in the computer simulation so they could see who got killed instead. Those students are now going around asking why they can’t do similar projects in other classrooms, and they are starting to see themselves as the educational innovators in the school.

Innovation in Orange Grove

Probably the most advanced work with system dynamics and learner-directed learning has been taking place at the Orange Grove Middle School in Tucson, AZ. My mentor at MIT, Dr. Gordon Brown, has acted as a “citizen champion,” bringing system dynamics into the local school system. He first introduced one of the biology teachers at Orange Grove, Frank Draper, to the STELLA software. Frank began using the software to develop projects in his class, with some very exciting results. Two-thirds of the way through the semester, his class had already covered all of the required material for the semester. Because there had been so much excitement, dedication, and accelerated learning going on, the students absorbed the material much faster than ever before.

It was the first time the school had seen twelve- to fourteen-year-old students who wanted to come into school early or stay late to work on their simulations. Even without written assignments, the students would spend weekends researching information they would need for their next project. Discipline problems practically disappeared.

“It was the first time the school had seen twelve- to fourteen-year-old students who wanted to come into school early or stay late to work on their simulations.”

Since then, 200-300 students have been taught using this approach, and teachers have seen an order-of-magnitude improvement in their abilities. As a sign of the school district’s further commitment to and confidence in system dynamics and learner-directed learning, last year the district passed a $30 million bond issue to build a new high school which will be organized and taught according to the ideas that have been pioneered in the junior high school.

Building a Network

Most schools that are working with system dynamics are isolated from one another. They don’t have knowledge of other programs being developed or work being done in other schools. They need the inspiration of knowing that a growing community of people are all working toward common goals. Programs are now starting that will support these pioneering schools and provide educational materials.

At MIT, I work with a group of undergraduates who are developing materials that can be used in high schools and junior high schools. They have been working with teachers at Cambridge Rindge and Latin High School in Cambridge, MA as a field laboratory for developing and testing new materials. Also, John R. Bemis of Concord, MA, has made a very generous donation to set up an office that will act as an information interchange among schools that are pursuing this field. Called the Creative Learning Exchange, it will solicit material being developed in various places, reproduce it, and send it out to the schools.

Time Frame

I believe it will probably take 20 years for these ideas to be fairly widely embedded in the schools. It will be another 20 years after that for the students who have gone through these schools and learned the systems viewpoint to become active in politics and corporations. So at the very least we are talking about a forty-year time horizon.

Building any sort of a new foundation takes a lot of investment in time and energy. I don’t see the forty-year time horizon as anything to be pessimistic about. It’s a challenge, an opportunity. Along the way there will be many exciting discoveries, as there have been with the exploration of past frontiers.

Escalation: The Dynamics of Insecurity

Have you ever been caught in a situation where you felt that things were going well beyond what you intended, but you felt powerless to stop it? As a child, perhaps, in the playground at school — a classmate makes a snide comment, and you counter with a sharp retort. The next round of insults gets uglier and louder. You each stick your neck out further and further with every remark. Classmates gather around and egg on the escalation of hostilities. Pretty soon, you are so far out on a limb that there is little else left to do but succumb to the chanting that has begun all around you — “Fight! Fight! Fight!”

The Dynamics of Insecurity

At the heart of an escalation dynamic are two (or more) parties, each of whom feels threatened by the actions of the other (see “Escalation Archetype”). Each side attempts to keep things under control by managing its own balancing process. Actions taken by A, for example, improve A’s result relative to B. This decreases A’s feeling of threat, so A eases off its activities (B1). B, on the other hand, now feels threatened by A’s relative advantage and increases its activities in order to improve its result over A (B2). The interaction of the two parties trying to unilaterally maintain control produces a reinforcing spiral in which nobody feels in control.
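
To make the structure concrete, here is a minimal sketch of the archetype in Python. The numbers and the one-unit “cushion” are purely illustrative; the point is that each party follows only its own balancing rule (act until the perceived threat is gone), yet together the two rules ratchet both sides ever higher.

```python
# Minimal sketch of the escalation archetype: each party acts only to close
# its own perceived gap (a balancing loop from its point of view), but the
# two loops chain into a reinforcing spiral. All numbers are illustrative.

def simulate_escalation(rounds=8, cushion=1.0):
    a, b = 0.0, 1.0                      # B starts slightly ahead, so A feels threatened
    log = [("start", a, b)]
    for r in range(1, rounds + 1):
        if b > a:                        # B1: A acts until it no longer feels threatened
            a = b + cushion              #     ...and adds a little cushion to feel safe
        log.append((f"A acts (round {r})", a, b))
        if a > b:                        # B2: B now perceives A's advantage as a threat
            b = a + cushion
        log.append((f"B acts (round {r})", a, b))
    return log

if __name__ == "__main__":
    for event, a, b in simulate_escalation():
        print(f"{event:17s} A={a:5.1f}  B={b:5.1f}")
```

Printing the log shows the playground pattern exactly: each response is just enough to restore one side's sense of security, and just enough to threaten the other side all over again.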

In school, a few harsh words can quickly lead to a playground brawl. In a more deadly confrontation, the escalation structure can lead to catastrophic consequences. The Cuban Missile Crisis in October of 1962, for example, caught U.S. President Kennedy and Soviet Chairman Khrushchev in an escalation structure that led their countries to the brink of nuclear war.

The crisis began with the discovery of offensive nuclear weapons being constructed in Cuba — contrary to repeated public assurances by the Soviet chairman. The U.S. called for complete dismantling and withdrawal of the missiles. The Soviets first denied the existence of any such missiles. Then they acknowledged the missiles but refused to remove them, claiming they were defensive. Kennedy responded by ordering a naval blockade around Cuba to prevent more missiles from being shipped. Tensions were high. The Soviets pressed for accelerated construction of the missiles already in Cuba. The United States massed over 200,000 troops in Florida to prepare for an invasion.

When a United States U2 reconnaissance plane was shot down over Cuba, Kennedy’s advisors unanimously proposed launching a retaliatory strike. But Kennedy stopped short. “It isn’t the first step that concerns me,” he said, “but both sides escalating to the fourth and fifth step. And we won’t go to the sixth because there [will be] no one around to do so.” Had Kennedy not broken the escalation structure at that juncture, the forces unleashed might have been beyond anyone’s control to stop.

Escalation Archetype and Price Wars

De-escalation

The Cuban missile crisis was one incident in a larger dynamic — the Cold War. Although that particular crisis was resolved, it did nothing to defuse the mutual distrust between the two countries, so the arms race continued (see “Arms Race” diagram). The balance of power shifted over time as each side built more arms in response to a perceived threat from the other. Yet, the very act of building arms to “balance” the situation only led to further threat, which strengthened the other side’s “need” for even more arms.

It takes two to have an arms race, but only one to stop it. Unilateral action can break the escalation dynamic by robbing it of its legitimacy. If one side stops building arms, the source of threat diminishes, giving the other side less reason to invest in more arms. The escalation can then run in reverse. A recent newspaper headline, “Gorbachev escalates arms cuts,” shows how the arms race is now being driven rapidly in reverse.

Price Wars

Wars on Many Fronts

Escalation dynamics, because they thrive in a competitive environment, are pervasive in business. The common logic is that whenever your competitor gains, you lose (and vice versa). That logic leads to all kinds of “wars” — price wars, advertising wars, rebate and promotion wars, salary and benefit wars, labor and management wars, divisional wars, marketing vs. manufacturing department wars, and so on.

At the core of each of these wars is a set of relative measures that pits one group against another in a zero-sum game. In a typical price war (see “Price Wars” diagram), company A wants to “buy” market share by cutting its price. As its sales and market share increase, B’s market share decreases. B retaliates by slashing its prices, generating more sales for B at the expense of A’s sales. In the short run, consumers may benefit from low prices. But in the long term, everyone may lose, since depressed prices mean less ability to invest in new product development, customer service, and overall attractiveness for the next round of competition.

Reversing or stopping such price wars is difficult. As competitors, A and B cannot collude to set prices. Nor is either company likely to stop unilaterally, since in the absence of other distinguishing features, the market usually favors the one with the lower price. In the heat of battle, a company can easily get locked into one competitive variable, such as price, and neglect to emphasize other strengths. Texas Instruments learned that lesson the hard way. Even though Texas Instruments had a superior technical product, it had to write off its entire personal computer business (the TI-99/4A) as a result of a vicious price war with Commodore.

Insecurity

As the term “threat” suggests, the escalation archetype is about insecurity. In our playground example, the name-calling threatens our reputation and makes us insecure about our identity. The Cuban Missile Crisis and the arms race threatened the national security of both countries. Engaging in a price war reveals each company’s insecurity about its ability to hold on to customers on a basis other than price.

If you find yourself caught in an escalation dynamic, drawing out the archetype can help you gain some perspective. The following questions are useful for identifying escalation structures. With advance knowledge, you can design strategies around them or use them to your advantage:

  • Who are the parties whose actions are perceived as threats?
  • What is being threatened, and what is the source of that threat?
  • What is the relative measure that pits one party against the other — and can you change it?
  • What are the significant delays in the system that may distort the true nature of the threat?
  • Can you identify a larger goal that will encompass the individual goals?
  • What are the deep-rooted assumptions that lie beneath the actions taken in response to the threat?

The description of the escalation archetype is based on the systems archetypes presented in The Fifth Discipline by Peter M. Senge (Doubleday, 1990).

Stress Management–Whose Job Is It?

Stress management — long the domain of psychiatrists and therapists — is increasingly being recognized by corporate America as a critical workforce issue. The reason is partly financial. Stress directly affects the bottom line — in lower productivity, lost time due to sick days, turnover, higher defect rates, higher medical costs, and more. “It has been estimated that output could be boosted by at least 10 percent if the work loss and impaired job performance attributable to mismanaged stress were eliminated,” notes Jack Homer, a system dynamicist and management consultant who has studied worker burnout. And, according to Fortune magazine, stress-related illnesses cost twice as much as other workplace injuries: more than $15,000 in medical treatment and lost time.

Though there is much agreement today that stress is a problem, there is little accord on what causes it or how to address it. How can workers and managers deal effectively with the rising stress level in organizations? Will the current batch of remedies be effective over the long term? Is stress just a natural part of organizational life, or are there leverage points for changing the structure of the workplace in order to reduce or eliminate the sources of stress? Beneath all these issues lie deeper questions about what we value as individuals, organizations, and as a society.

Stress Management 101

One working definition of stress is “when you begin every day with 30 hours’ worth of ‘stuff’ to finish by the end of a 24-hour day.” Some symptoms of stress — shortness of breath or stomach cramps — can be felt immediately and cry out for immediate action. Other symptoms — ulcers, hypertension, sleep disorders — develop over a long period of time. One popular remedy for feeling “stressed-out” is to exercise (see “Shifting the Burden of Stress Management” diagram). As you exercise, you relieve tensions in your body and begin to feel less stressed (B1). Another, more fundamental response to stress is to learn how to better manage your workload. As you increase your ability to manage your workload, the stress will decrease (B2). But it takes time before the payoff appears. And since exercise programs relieve stress relatively quickly, the impetus for managing the workload disappears, making workload reduction seem unnecessary.

Shifting the Burden of Stress Management

Although the exercise loop (and other coping loops) are effective in relieving the symptoms of stress, the perverse side effect is that they also increase your capacity to handle more stress. Those 30 hours’ worth of work seem less intimidating if you are pumped up and full of energy. But eventually, the stress level will climb along with the growing workload, reinforcing your need for more exercise (R1). If twice a week is not enough, make it five. If needed, go on weekends. Have a knot in your stomach? Take some stress tablets. Feeling a bit edgy and anxious? Sign up for a gymnastics class. Suffering from malaise and lethargy? Enroll in a stress management workshop. There is an endless supply of quick fixes available. Each will help in the short term, but to the extent that they only increase our ability to handle more stress, they just reinforce the problem. Ultimately, this prolonged dynamic contributes to burnout (see “Burnout” sidebar).
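
The structure described above can be sketched as a few lines of simulation code. This is only an illustration: the rates and the 30-hour workload are invented, and the real diagram has more links, but it shows why the quick fix feels good while leaving the underlying problem to grow.

```python
# Rough sketch of the "Shifting the Burden" structure above. The quick fix
# (exercise) relieves felt stress right away, which removes the pressure to
# pursue the slower fundamental fix (managing the workload), so the workload
# itself keeps creeping up. All rates and units are illustrative.

def simulate(weeks=52, use_quick_fix=True):
    workload = 30.0      # hours of "stuff" per day, per the working definition above
    capacity = 24.0      # hours the person can actually absorb
    felt_stress = workload - capacity
    for _ in range(weeks):
        felt_stress = max(workload - capacity, 0.0)
        if use_quick_fix:
            capacity += 0.2 * felt_stress   # B1: exercise relieves the symptom quickly...
            workload += 0.3                 # R1: ...so nothing stops the workload from growing
        else:
            workload -= 0.1 * felt_stress   # B2: the slower fix attacks the workload itself
    return felt_stress, workload

if __name__ == "__main__":
    for label, flag in (("quick fix only", True), ("fundamental fix", False)):
        stress, load = simulate(use_quick_fix=flag)
        print(f"{label:15s} after a year: felt stress {stress:4.1f}, workload {load:4.1f} h/day")
```

Under the quick fix the felt stress stays tolerable while the workload itself keeps climbing; under the fundamental fix the workload is slowly brought back in line with capacity.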

The Delegation Shuffle

If we are serious about tackling stress by reducing our workload, the obvious solution seems to be to delegate more. The problem with delegating, however, is that it does nothing to reduce the overall workload of a department or division — it simply shifts the work burden onto another person (see “The Delegation Shuffle” diagram). As each person finds himself overwhelmed by the workload, he delegates more responsibilities to the next level. That raises the workload of the person at the next level until she becomes overworked and begins to delegate work to someone else. The flow of work trickles down, leaving each person working at maximum capacity — and usually burying the front line worker.

Burnout

Perhaps the most well-known manifestation of stress in the workplace is “burnout,” the total mental and physical exhaustion that comes from overwork. Burnout results from an individual’s continual attempts to fulfill unmet expectations by working longer hours. Longer hours, however, mean more exposure to the normal stress of work, which drains the individual’s store of energy and leaves less time to replenish that energy. Over time, this energy drain may render the individual less capable of reaching her goals, leading her to work even longer hours to try to “catch up.” The result is a vicious cycle of unfulfilled expectations, overwork, and eventual burnout (R1 and R2).

Given the structure of the burnout cycle, prevention appears to be simple: lower our expectations and reduce the workload (B2). But that is not enough, says Homer. Once an individual reduces his workload to a reasonable level, his modest expectations and rising energy enable him to finish work at a satisfying rate without putting in a lot of hours. This euphoric period of “coasting” is short-lived, however. Why? Because such satisfaction is unnatural for workaholics (or those in workaholic environments) and only encourages them to expand their goals and begin the cycle again (R2).

If the tasks are not delegated with a clear purpose and understanding of the skills needed, individuals may find themselves assigned to tasks that they are not ready to handle. This increases their stress and their workload, forcing them to shift even more responsibilities downstream. As more people are called on to do tasks they are not prepared for, the number of mistakes and crises rises, productivity decreases, and the stress level increases throughout the system. The result is that the stress that was shifted out of the system feeds back in on itself, creating a vicious cycle of increasing stress.

Toward Stress Prevention

One of the problems with the way stress has traditionally been handled in corporations is that it has been treated as an individual problem. Companies offer employees stress management programs, relaxation classes, and other methods for dealing with stress. But these programs are symptomatic solutions at best. By providing and endorsing such programs, companies are teaching employees how to “cover up” underlying stress and sending an unspoken message to individuals that they are to blame for the stress they feel.

Stress management programs rarely address the organizational sources of stress. “How can it possibly make sense for a company to soothe employees with one hand — teaching them relaxation through rhythmic breathing — while whipping them like egg whites with the other, moving up deadlines, increasing overtime, or withholding information about job security?” asks Fortune magazine. And Jack Homer points out that frustration is at the heart of burnout: frustration from unclear roles and responsibilities, a poor job fit, inadequate rewards and employee support, or deadline pressure.

The key question for dealing with stress on an organizational level is: how do we as an organization create environments which are either stressful or stress-free? To answer this question, we need to look at how organizational structures are contributing to the very stress we are trying to eliminate. For example, if the reward systems are set up with the single-minded goal of maximizing quarterly results, the result could be rising stress levels and a higher incidence of burnout throughout the organization, which can end up actually reducing overall productivity.

Sometimes the corporate culture is the culprit. In The Fifth Discipline, Peter Senge tells the story of a friend who fruitlessly tried to reduce burnout among professionals in his rapidly growing training business: “He wrote memos, shortened working hours, even closed and locked offices earlier — all attempts to get people to stop overworking. But all these actions were offset — people ignored the memos, disobeyed the shortened hours, and took their work home with them when the offices were locked. Why? Because an unwritten norm in the organization stated that the real heroes, the people who really cared and who got ahead in the organization, worked seventy hours a week — a norm that my friend had established himself by his own prodigious energy and long hours.”

This story illustrates an important lesson for developing effective organizational responses to stress: no stress reduction program will be effective if the organizational norms, systems, and culture do not support that goal. Creating a long-term strategy for dealing effectively with stress will mean rethinking some of our traditional ways of thinking about and dealing with stress.

Lessons from the Army

Surprisingly, some effective techniques for creating built-in mechanisms to reduce stress in organizations come from the army. According to Fortune magazine, “The number one psychological discovery from World War II is the strength imparted by the small, primary work group.” Having stable work groups becomes even more important when a company is undergoing a major upheaval such as a downsizing. A tightly knit work group can help lend cohesion and continuity to an otherwise uncertain workplace. Army research also found that keeping a work group together after a battle (or a downsizing) is crucial, “since members, by collectively reliving their experience and trying to put it in perspective, get emotions off their chests that otherwise might leave them stressed out for months or years.”

Another effective method for reducing stress throughout the workplace is institutionalizing methods for taking time off, to help employees maintain a balanced perspective and a stable energy level. The five-day workweek with weekends off is one formalized structure for building in stress-relief valves. So are holidays and vacation days. Prolonged periods of working longer hours and on weekends, however, often “short-circuit” such safety measures.

“Time Outs”

Institutionalized “time outs” could go even further. In the U.S., vacation days (and time off in general) are viewed as lost production time, and thus an added cost. Given this mental model, it is not surprising that in a recent report, the U.S. was ranked at the bottom of a list of industrialized countries on the average number of vacation days per year. The U.S. (along with Japan) had 10 vacation days, versus 20-30 days for several European countries.

If time off were viewed more as an investment than a cost, vacations could be seen by corporations as an important ally for keeping their employees productive and stress-free. Manufacturers know that investments in maintenance programs and planned down times for their equipment pay off in fewer breakdowns and longer useful lives. Are we guilty of treating our machines and physical plants better than our employees simply because it is easier to measure the effects of overwork on machines than on people?

The Delegation Shuffle

Ultimately, stress management comes down to making some fundamental decisions about what we value — as both companies and individuals. As individuals, do we see our work as consistent with our goals and our other roles — as parents, community leaders or church leaders? As companies, how do we view our role in society — as employers of resources, producers of goods, suppliers of services, enhancers of human potential, or vehicles for social progress? Do we value our employees and see them as whole individuals — with families, concerns and experiences that affect and are affected by their work? Answering these questions will provide a starting place for addressing the issue of stress at a fundamental, not symptomatic, level.

The portions of this article on burnout were adapted from Jack Homer’s article, “Worker Burnout: A Dynamic Model with Implications for Prevention and Control,” which appeared in the Summer 1985 issue of the System Dynamics Review.

From Causal Loops to Graphical Functions: Articulating Chaos

This month we continue our look at Graphical Function Diagrams (GFDs). GFDs help us see how two variables are interrelated by plotting the relationship between them over the range of relevant values.

In Chaos: Making a New Science (Penguin Books, New York), James Gleick describes a relatively new branch of science that has profound implications for how we view our world. Chaos, simply put, is the science of seeing order and pattern where formerly only the random, erratic, and unpredictable had been observed. In a way, systems thinking also deals in the science of chaos. Diagrams such as causal loops, accumulators and flows, and graphical functions are all ways of extracting the underlying structure from the “noise” of everyday life.

Relating Behavior to Structure

Both systems thinking and chaos insist that real-world phenomena need to be described in “real” terms that match our intuition. Writing partial differential equations to describe clouds, for example, misses the point, because we don’t perceive clouds in that way. It is not enough to have a model that reproduces some real-world phenomena if we cannot identify the structures that produce the behavior of the actual system. That’s why systems thinking diagrams focus on capturing reality in a format that taps into our intuitive understanding of the systems which we manage and live in.

Savings Loop

If we begin to explore our savings account “system” by drawing a causal loop diagram, we see that an increase in savings will lead to more interest earned, which increases our savings balance still further (left). The graph of this behavior over time will look something like the exponential growth curve (right).

From Causal Loops to GFDs

To see how a range of systems thinking tools can help capture the structure of a system at increasing levels of detail, let’s look once again at a system we are all familiar with — a savings account. If we plot out the structure of a savings account using a causal loop diagram (see “Savings Loop”), we see that an increase in savings will lead to more interest earned, which increases our savings balance still further. The graph of this behavior over time would look something like the exponential growth curve shown on the right of the diagram.
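
As a quick illustration, the reinforcing loop can be written as a few lines of code (the 6% interest rate and $1,000 opening balance are made up); the printed balances trace out the exponential growth curve described above.

```python
# The reinforcing savings loop: more savings -> more interest -> more savings.
# Rate and starting balance are illustrative only.
savings = 1000.0
for year in range(1, 31):
    interest_earned = 0.06 * savings   # more savings earns more interest...
    savings += interest_earned         # ...which adds to savings, closing the loop
    if year % 10 == 0:
        print(f"year {year}: ${savings:,.0f}")
```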

“Wait a minute,” you may protest, “I don’t know whose bank account that is, but it certainly doesn’t look like mine!” That’s true — rarely is a system so simple in real life; nor are bank accounts that well-behaved. There are usually many other factors involved. The question of how many factors to include always depends on the purpose of examining the system. Since the details of any system are infinitely complex, it is futile to strive to “model the system.” In our sample case, the purpose is to represent as concisely as possible the important factors which affect the balance of a typical savings account, so we want to look at savings, income, interest earned, and spending (see “Savings & Spending Loops”). If we were only interested in capturing the fact that there is a balancing loop that explains the slowdown in the growth of our savings account, we could stop at this point. On the other hand, if we want to be more explicit about the structure behind the behavior, we need to translate our diagram into accumulators and flows.

Savings & Spending Loops

A balancing loop explains the slowing growth of the savings account: as our savings increases, we are more likely to increase spending, which will reduce our savings.

Accumulators and Flows

When we translate CLDs into Accumulators and Flows, we are becoming even more precise about the structures producing the dynamics. The bathtub as a metaphor for accumulations (see “Accumulators: Bathtubs, Bathtubs Everywhere,” Toolbox, February 1991) helps us visualize how concepts as diverse as savings, pollution, customers, and corporate reputation share a similar underlying structure.

Accumulators and flows add more detail and understanding to our causal loop diagram by differentiating between those variables in the diagram that “accumulate” (our savings balance) and those that just “flow” through the system (income and spending). In the “Savings as an Accumulator” diagram, we can visually see money flowing into and out of savings in the form of income and spending. More importantly, we can relate to this structure intuitively because we experience money in terms of flows and accumulations (or lack thereof!).
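
A minimal sketch of that accumulator-and-flow structure in code might look like the following. The dollar amounts and rates are invented for illustration; the point is simply that savings is a stock that accumulates its net inflow each period.

```python
# Savings as an accumulator: the stock (savings) changes only through its
# flows (income and interest in, spending out). All figures are illustrative.

def simulate_savings(months=120, savings=2000.0):
    monthly_income = 300.0      # inflow deposited into savings each month
    interest_rate = 0.005       # 0.5% interest per month on the balance
    balances = []
    for _ in range(months):
        interest = interest_rate * savings                 # reinforcing inflow
        spending = 0.08 * savings                          # balancing outflow: spend more as savings grow
        savings += monthly_income + interest - spending    # the stock accumulates the net flow
        balances.append(savings)
    return balances

if __name__ == "__main__":
    trajectory = simulate_savings()
    print(f"balance after 1 year:   ${trajectory[11]:,.0f}")
    print(f"balance after 10 years: ${trajectory[-1]:,.0f}")
```

With these made-up numbers the balance rises and then levels off near the point where spending catches up with income plus interest, which is exactly the slowdown in growth that the balancing loop in “Savings & Spending Loops” explains.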

Graphical Functions: Mapping Policies

So now we have a pretty good idea of both the basic dynamic behavior of the savings account, and a feel for the important inflows and outflows. But our model is still pretty elementary. Suppose now you wanted to go a little further and use a systems diagram for describing your family’s policy for managing your savings. “Our discretionary spending depends on how much savings we have,” you explain. “If the balance in our savings account is below $5000, we don’t spend a dime. As our savings rise above $5000, we may increase discretionary spending by, say, $15-20 per month. If our savings tops $10,000, then we’re likely to spend several hundred dollars a month. But in any case, we don’t see ourselves spending more than $500 per month on discretionary expenses.”

Savings as an Accumulator

Graphical functions allow us to expand our exploration of a system to include policies and interrelationships between variables. If we tried to capture the savings plan we described above in an analytical form, we would have to do quite a bit of work in order to come up with a suitable equation. And when we were done, it would be hard to tell if the equation represented our savings account or the number of widgets on sale at Wal-Mart. The truth is, most of us don’t think in terms of abstract mathematical concepts, but in images and structures grounded in our everyday experience. That’s why graphical functions are useful. They capture policies in an intuitive way through a simple graph that maps out one variable in relation to another (see “Savings Policy Graphical Function Diagram”). In our savings policy plan, for example, we see at a glance that savings has no impact on our discretionary expenses until savings hits $5,000. After that, discretionary expenses rise until savings reaches $20,000, at which point they level out at $500.
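
In a package like STELLA, a graphical function is typically entered as a set of points; in code, the same idea is just a lookup table with linear interpolation. The sketch below encodes the household policy described above. The $5,000 and $20,000 breakpoints and the $500 ceiling come from the text, while the intermediate point of $300 at $10,000 is our own illustrative reading of “several hundred dollars a month.”

```python
# A graphical function as a lookup table with linear interpolation between
# the listed (savings, monthly discretionary spending) points. The $5,000
# and $20,000 breakpoints and the $500 ceiling follow the policy in the
# text; the $10,000 point is an illustrative assumption.

POLICY_POINTS = [(0, 0.0), (5_000, 0.0), (10_000, 300.0), (20_000, 500.0)]

def discretionary_spending(savings: float) -> float:
    """Piecewise-linear graphical function: savings -> monthly discretionary spending."""
    if savings <= POLICY_POINTS[0][0]:
        return POLICY_POINTS[0][1]
    if savings >= POLICY_POINTS[-1][0]:
        return POLICY_POINTS[-1][1]          # the policy levels out at $500 per month
    for (x0, y0), (x1, y1) in zip(POLICY_POINTS, POLICY_POINTS[1:]):
        if x0 <= savings <= x1:
            return y0 + (y1 - y0) * (savings - x0) / (x1 - x0)
    return POLICY_POINTS[-1][1]              # not reached; keeps the function total

if __name__ == "__main__":
    for s in (3_000, 6_000, 12_000, 25_000):
        print(f"savings ${s:>6,}: spend ${discretionary_spending(s):6.2f} per month")
```

Plugged into the savings model as the spending outflow, this lookup plays the same role the graphical function plays in a simulation package: it turns a described policy into something the model can execute.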

Savings Policy Graphical Function Diagram

Artistic Managers

Physicist Mitchell Feigenbaum suggests that art is a theory about the way the world looks to human beings. “It’s abundantly obvious that one doesn’t know the world around us in detail. What artists have accomplished is realizing that there’s only a small amount of stuff that’s important, and seeing what it is.” Whether we recognize it or not, we are artists as well, selectively picking out details of the world which we choose to focus on. Those details appear as items on our production reports, financial statements, and customer surveys. To the extent that those details do not capture the core structures that are important, we may be the unwitting producers of our own chaos. As one systems thinking maxim warns, “It ain’t what you don’t know that hurts you, it’s what you DO know that ain’t so.”

Rethinking Workforce Planning Criteria

In 1989, Digital Equipment Corporation in the United States faced business conditions unique in its 30-year history. Historically, Digital had been a high-growth company in a high-growth industry. To keep up with the rapid growth, Digital hired a large number of people between 1983 and 1987 (see “Workforce and Revenue Trends” graph). The large number of new hires, as well as Digital’s traditionally low turnover, allowed the workforce to keep up with the rapid pace of change in the industry.

But a sluggish U.S. economy had resulted in a slowdown in information systems spending, Digital’s marketplace. At the same time, the emergence of open computer systems and desktop computing threatened Digital’s mainstream revenue source, its proprietary operating system. As new product lifecycles in the computer industry shrank, customers became more likely to hold off on upgrades or purchases until the next generation of hardware emerged.

These conditions meant unique challenges for Digital’s workforce planning team. They needed to not only shrink Digital’s high cost structure, but also position the workforce to take advantage of new areas emerging in the industry. In addition, Digital’s evolving business mix — towards systems integration, solution selling, and customization — meant new skill requirements for the workforce.

Digital had entered a new business arena, but its workforce was not well-positioned to meet the demands of the marketplace. The challenge was to reexamine the distribution as well as the flow of people within the different business functions, and to match people with the appropriate skills to emerging business areas.

Workforce and Revenue Trends

There were many different opinions about Digital’s workforce challenges. Many people felt that the downturn was temporary and not indicative of any fundamental trend. Their belief was that Digital would “outgrow” the revenue slowdown, as it had in the past. Some felt the only solution was dramatic downsizing across the board, to bring expenses more in line with revenues. Still others, again believing the problem was an oversized workforce, were advocating less dramatic actions such as a hiring freeze or a moderate workforce cut. The multitude of perceptions within the company indicated that there was no consensus on the issues involved, the alternatives, or the long-term impact of various workforce policies.

To address these issues, our internal consultants, along with the new U.S. Workforce and Organizational Planning Manager, applied a systems thinking approach to better understand the workforce planning challenges that faced Digital.

The key questions we addressed were:

  • How can we categorize our workforce in order to gain a more meaningful look at the underlying dynamics within and across all functions?
  • What would be the impact on our workforce capability and corporate profitability if we implemented each of the following policies: continue with “business as usual,” maintain current headcount, or implement across-the-board cuts?
  • What are the important factors we need to consider when planning for future workforce needs?

(Editor’s note: Readers may want to try their hand at answering the above questions and perhaps identify analogous “accumulator management” challenges in their own organizations.)

The Digital U.S. Professional Workforce Planning Project was initiated to explore key issues of workforce allocation as they related to Digital’s high cost structure and evolving business mix. The challenge was to find a set of human resource policies that would allow Digital to maintain a workforce that was flexible and adaptable to the current and future business environment.

We began by making what we called an “intelligent categorization” of our U.S. professional workforce — an inventory of our employees by function and by level. Within each product or service function, we divided the workforce into four levels according to skills: entry level, junior level, midlevel individual contributor/first line manager, and senior individual contributor/senior manager (an individual contributor is someone who could have the same level of responsibility as a manager, but does not supervise other employees). We realized, however, that the categorization was only the first step. Our inventory gave us no idea of the movement of people throughout the organization, or the effects of human resource policies on their movement.

From the initial categorization, we created a computer simulation model (using STELLA modeling software) that would allow us to gain insight into the flow of people between functions and between skill levels within functions. The model was very “primitive” in the sense that we tried to build the simplest model that would capture the relevant dynamics. For each function, we looked at the different skill levels as accumulators. The major policies that determined the flows into and out of the accumulators were hiring, promotion, and attrition rates (see “Workforce Planning Model Overview” diagram).
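
To give a flavor of what such a model looks like, here is a bare-bones sketch in Python rather than STELLA. The level names follow the article's categorization; every headcount and rate below is an invented placeholder, not a Digital figure.

```python
# Bare-bones accumulator-and-flow workforce sketch: each skill level is a
# stock, and hiring, promotion, and attrition are the flows that move people
# between levels and out of the company. All numbers are placeholders.

LEVELS = ["entry", "junior", "mid", "senior"]

def simulate_workforce(years=5, headcount=None, hiring=None,
                       promotion_rate=None, attrition_rate=None):
    headcount = dict(headcount or {"entry": 400, "junior": 900, "mid": 700, "senior": 300})
    hiring = hiring or {"entry": 60, "junior": 20, "mid": 5, "senior": 0}            # hires per year
    promotion_rate = promotion_rate or {"entry": 0.25, "junior": 0.15, "mid": 0.08, "senior": 0.0}
    attrition_rate = attrition_rate or {"entry": 0.10, "junior": 0.06, "mid": 0.04, "senior": 0.03}

    for _ in range(years):
        promoted = {lvl: promotion_rate[lvl] * headcount[lvl] for lvl in LEVELS}
        departed = {lvl: attrition_rate[lvl] * headcount[lvl] for lvl in LEVELS}
        for i, lvl in enumerate(LEVELS):
            inflow = hiring[lvl] + (promoted[LEVELS[i - 1]] if i > 0 else 0.0)
            outflow = promoted[lvl] + departed[lvl]
            headcount[lvl] += inflow - outflow
    return headcount

if __name__ == "__main__":
    mix = simulate_workforce(years=5)
    print({lvl: round(n) for lvl, n in mix.items()}, "total:", round(sum(mix.values())))
```

Rerunning the same model with different hiring, promotion, and attrition assumptions is essentially what the scenarios below do: hold one aggregate measure (such as total headcount) roughly steady and watch how the mix across levels drifts.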

Once we felt the model captured the relevant dynamics, we began to explore the long-term impacts of various policies.

Status Quo Scenario

The first scenario we looked at was the status quo policy: what would happen in five years if we continued our current year’s hiring, promotion, and attrition policies? This meant little hiring except in critical areas, and following a very conservative growth strategy. The status quo scenario tested the perception held by some managers that our current strategy of managing the total headcount, without attention to the mix between different levels, would be effective for the long term.

Status Quo Policy

The resulting picture was clearly not where we wanted to be in five years. Although total headcount remained fairly steady, a look at the number of employees between different levels revealed a large mix imbalance (see “Status Quo Policy” graph). The mid- and junior-level headcount decreased, while the entry and senior levels swelled. The simulation revealed that the status quo policy, designed to control costs, would actually raise our cost structure because of the increase in higher-salaried employees. Even a very conservative estimate of the resulting costs showed a large impact on Digital’s profitability.

In addition, the mix imbalance would have a tremendous effect on the skills and responsibilities of the workforce. If we did not implement dramatic changes that were consistent with the new workforce mix (such as an expansion of responsibilities in the senior and entry levels), the result would be an underutilized workforce at the senior level and an overtaxed workforce at the junior level.

Equilibrium Scenario

The second strategy we tested was “what would it take to keep the total headcount and mix balance the same in five years as it is today?” The assumption behind this scenario was that what we have now in terms of a workforce mix is adequate for our future needs.

In order to simulate this scenario, we identified areas in the company that are growth areas and built in a modest growth trend. In addition, we maintained or flattened areas that are not critical business units. In this way, the total headcount across Digital U.S. and the headcount within each skill level would stay the same, but the headcount between different functions would vary. The growth in key areas would bring in more revenues, which would shrink the current discrepancy between our cost structure and revenue stream.

Again, the simulation showed surprising results. In order to maintain our current headcount and skill mix, we would have to make dramatic changes in attrition and hiring policies across the company. Pursuing this strategy would require a lot of intervention in the system, something that most managers are reluctant to do.

Workforce Planning Model Overview

A structural diagram of the workforce planning model captures the movement of employees through different seniority levels within a function. The flows of employees are determined by hiring, promotion, and attrition policies.

Across-the-Board Cuts

Another scenario we looked at was the effect of across-the-board policies on the headcount and skill level mix in various functions. Many managers have expressed the opinion that the solution to our current difficulties is to execute dramatic or moderate cuts across all functions. While this policy would generate immediate cost savings, we wanted to gauge the effects of such action over time.

The results of the simulation were similar to the “status quo” scenario. There was a wide variance in the resulting skills mix across different functions, which suggested that managing total headcount alone was not a viable long-term option. We need to look at other dimensions, such as the skills mix within functions, in order to maintain a flexible, productive workforce.

Our conclusion from this scenario was that different functions have different workforce dynamics, and the policies we pursue need to be sensitive to those differences and the different growth rates in those functions. For example, our computer systems are more reliable than they were 10 years ago, so we don’t need to allocate as many engineers to our repair function. At the same time, the emerging area of systems integration requires a more mature workforce with different abilities, such as project management and consulting skills, so we need to target a different workforce population for that function.

Rethinking Future Planning

The results of the various scenarios stimulated some insights that will help us rethink our criteria for future workforce planning:

 

  • We cannot simply implement a blanket policy across all functions. A knee-jerk reaction to increased demand or revenue shortfalls — cutting or adding staff across all functions — doesn’t take into account the different dynamics that are operating within each function. A reduction that increases efficiency in one area could rob an emerging area of the vital resources it needs to grow.
  • Managing total headcount is not the answer. Total headcount is one of the easiest parameters to measure and control. But “management by headcount” excludes other vital issues in workforce planning, such as maintaining an appropriate skills mix within and across different functions.
  • Hiring at the junior level must continue. Even when trying to maintain current employee levels, it is important to keep hiring at the junior levels. This action prevents a “top-heavy” workforce and injects new skills into the organization.
  • Changing promotion policies is key to creating a more flexible workforce. A flexible workforce that can respond quickly to emerging new areas is crucial in a fast-evolving industry such as computers. One way to gain a more flexible workforce is to create more lateral career paths, which will allow employees to gain more experience and responsibility without changing the mix of levels in the company. Modifying promotional policies is a high-leverage action for managing the flow of people and the mix of skills throughout the organization.

 

Planning Future Needs

The simulation model and the insights it has generated are now being shared with the U.S. Human Resource Management community. Our primary goal in the next phase of the project is to provide a systems framework for understanding the underlying causes of our revenue slowdown as it relates to workforce management, and to stimulate a rethinking of what constitutes high-leverage actions for planning our future workforce needs.

Naila Seif is an internal management consultant at Digital Equipment Corporation who has worked on projects ranging from sales and marketing to product development. She and Internal Consultant Rob Greenly provided the primary consulting support on this project.

Accumulation Management: Avoiding the “Pack Rat” Syndrome

I once read a story about a trivia “pack rat,” a man who had spent his entire life memorizing trivia. He knew baseball statistics of every player in the history of the major league. He had memorized the titles, directors, and actors of hundreds of movies. He knew the name of every television show that had ever aired.

But one day he found himself in an awkward predicament — no matter how hard he tried, he could not memorize another bit of trivia. He had finally taxed the limits of his rote memorization capacity. Although he had worked hard at acquiring his stock of trivia throughout his life, he had never considered how he might go about depleting it. He had not learned the fundamentals of accumulator management.

Pack Rats and Nomads

Life can in some ways be viewed as a never-ending task of managing various accumulators. Our pantries, refrigerators, checking accounts, and closets are among the many accumulations we manage daily.

Insurance Business as Accumulation Management

The insurance business can be mapped into a relatively simple diagram that highlights the major accumulators and flows. If we assign numbers next to each accumulator or flow indicating the percentage of organizational resources devoted to it, the diagram can help re-evaluate the organization’s current emphasis.

On one end of the accumulation management spectrum is the pack rat who throws nothing away. On the other end is the “nomad” who makes a virtue of owning no more than what can be packed into one suitcase. In between these two extremes lies the majority of the population who are constantly struggling to maintain the right balance between acquisitions and depletions.

Anatomy of the Accumulator Management Structure

A typical Accumulator Management Structure (AMS) has the following elements: the Accumulation, Acquisitions, Depletions, Desired Accumulation, and a Corrective Action (see “Accumulator Management Structure” diagram). In addition, there is almost always some delay between the Corrective Action and the Acquisition, because it takes time to actually memorize data or clear out the closet once we have decided to do so.

Accumulator Management Structure

In its simplest form, accumulator management can be viewed as a balancing loop with delay (top). A structural diagram (bottom) reveals that the flows controlling the accumulation are acquisitions and depletions.
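
To make the structure concrete, here is a minimal sketch of the balancing loop with delay shown in the diagram: the corrective action tries to close the gap between the accumulation and its desired level, but each acquisition arrives only after a delay. The parameter values are arbitrary assumptions chosen to show the overshoot such delays produce.

```python
from collections import deque

# A minimal sketch of the Accumulator Management Structure: corrective action
# (orders) closes the gap between the accumulation and the desired level, but
# acquisitions arrive only after a delay. All parameter values are illustrative.
DESIRED = 100.0            # desired accumulation
DELAY = 4                  # periods between corrective action and acquisition
DEPLETION = 10.0           # steady depletion per period
ADJUSTMENT_FRACTION = 0.5  # how aggressively the gap is closed each period

accumulation = 70.0        # start below the desired level to see the correction
pipeline = deque([DEPLETION] * DELAY)  # orders placed but not yet received

for t in range(20):
    acquisitions = pipeline.popleft()         # a delayed order finally arrives
    accumulation += acquisitions - DEPLETION  # stock changes by inflow minus outflow
    gap = DESIRED - accumulation
    order = max(0.0, DEPLETION + ADJUSTMENT_FRACTION * gap)  # corrective action
    pipeline.append(order)
    print(f"t={t:2d}  accumulation={accumulation:6.1f}  order placed={order:5.1f}")
```

Because of the delay, the stock overshoots the target before settling, which is the same behavior that makes real accumulations hard to manage.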

The Accumulator Management Structure is a generic structure that can represent a wide range of business settings where accumulation management is important. For example, the insurance business can be mapped into a relatively simple diagram by focusing on the basic accumulators and flows (see “Insurance Business as Accumulation Management” diagram). Insurance revolves around managing two main accumulators — policyholders and investments.

If managers assign a number next to each accumulator and flow in the diagram to represent the percentage of organizational resources devoted to each, the diagram can highlight which areas receive the largest organizational focus. This exercise can point out any weaknesses in the current organizational emphasis — for example, spending too little time trying to retain current policyholders — and reveal ways in which the company can better serve its customers.

Supply Lines and Delays

If we had direct and immediate control over all the elements in the AMS diagram, managing accumulations would be simple: we would calculate the depletion rate, set our desired accumulations accordingly, and implement actions that would immediately result in acquisitions. In our home life we already pretty much follow this pattern. For example, we plan our meals, decide on an appropriate amount of food to have on hand, figure out how long it will be before we run out of certain staples, and go to the grocery store as needed. Unfortunately, things are not that straightforward when we move into the organizational context.

One of the most challenging aspects of managing accumulations within organizations is captured in one word — delays. Identifying and characterizing the nature and source of delays often plays a critical role in managing accumulations effectively. A big part of the problem is that we usually have very little control over the supply line delay.

Supply Line and Delay in the Beer Game

The structure of the inventory management system in the Beer Game is similar to the AMS diagram. Understanding the nature and source of delays in a system — such as the supply line delay above — often plays a critical role in managing accumulations without overcorrecting.

Managing the “Beer Game”

In a production-distribution system game more fondly known as the Beer Game, participants are given the task of managing their own inventory (accumulation) of beer. Each team is composed of four players linked together in a structure similar to that represented in the AMS diagram (see “Supply Line and Delay in the Beer Game”). Within that team, each participant must make ordering decisions in order to maintain his or her desired level of inventory.

According to MIT Professor John Sterman, when participants try to manage accumulations in the Beer Game they usually run into three common problems. First, they typically underestimate the true length of the delay from the time they order to when they receive the beer and then overadjust their orders — even when they are given full information about the supply line delays. They do not appear to recognize that their ordering decisions affect the length of the supply line delay — that is, the more they order, the longer it takes to receive the beer.

In addition, he found that when people find it difficult to determine their optimal inventory level, they simply anchor their desired inventory on the initial inventory and adjust from there. This finding highlights the more general tendency people have to anchor on past goals or standards rather than search for better ones.

The third observation is that people generally point to factors outside the system as being responsible for the instabilities they observe in the game. That is, people offer open loop explanations rather than connecting the dynamics back to their own decision making. In fact, the wide oscillations in inventory are actually generated by the decisions they make.
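
In the paper cited below, Sterman formalizes this ordering behavior as an anchoring-and-adjustment rule: orders anchor on expected demand and adjust for the inventory gap and, usually too weakly, for the supply line of unfilled orders. The single-stage sketch below uses that general form with assumed parameter values and a hypothetical demand step; it is not the full four-stage Beer Game, but it shows how underweighting the supply line produces the familiar oscillations.

```python
from collections import deque

# Single-stage sketch of the anchoring-and-adjustment ordering rule: orders
# anchor on demand and adjust for the inventory gap and (too weakly) for the
# supply line. Parameters and the demand step are assumed for illustration.
DELAY = 3                  # periods from placing an order to receiving it
DESIRED_INVENTORY = 12.0
INV_WEIGHT = 0.8           # how strongly the inventory gap is corrected
SUPPLY_LINE_WEIGHT = 0.2   # the misperception: the supply line is underweighted

inventory = 12.0
pipeline = deque([4.0] * DELAY)  # orders placed but not yet received

for week in range(25):
    demand = 4.0 if week < 5 else 8.0     # a one-time step up in demand
    supply_line = sum(pipeline)
    desired_supply_line = demand * DELAY  # enough on order to cover the delay
    order = max(0.0, demand
                + INV_WEIGHT * (DESIRED_INVENTORY - inventory)
                + SUPPLY_LINE_WEIGHT * (desired_supply_line - supply_line))
    pipeline.append(order)
    arrivals = pipeline.popleft()
    inventory += arrivals - demand        # negative inventory represents a backlog
    print(f"week {week:2d}: inventory {inventory:6.1f}, order {order:5.1f}")
```

Setting SUPPLY_LINE_WEIGHT closer to 1.0 dampens the swings considerably, which is exactly the adjustment Sterman's findings point to.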

Avoiding the “Pack Rat” Syndrome

If you want to avoid the “pack rat” syndrome, you need to manage the whole Accumulator Management Structure and not just focus on one piece of it. The observations about the difficulties of managing the Beer Game suggest that you should think through the following questions when confronting a typical accumulator management situation: (1) Where are the supply line delays and how are they changing? (2) What factors determine what the Desired Accumulation should be? (3) How do current policies and decisions feed back into this system to produce the results we have observed? The Accumulator Management Structure diagram is a useful starting point for addressing these questions.

Further Reading: “Modeling Managerial Behavior: Misperceptions of Feedback in a Dynamic Decision Making Experiment,” by John D. Sterman, Management Science, Vol. 35, No. 3, March 1989.

Not All Recessions Are Created Equal

Last month, the National Bureau of Economic Research confirmed what most businesses have known for some time—the United States economy is in a recession. But debate continues over how severe the downturn will be, how long it will last, and what companies can do to survive in the tough economic climate. Some have argued that this recession, like most since World War II, will be short and mild, and the economy will bounce back quickly. Others have warned of impending doom that would rival, if not surpass, the Great Depression of the 1930s.

Which view should we follow as we head into the uncertainties of the 1990s? Can companies learn from the lessons of past recessions and position themselves to make the most of the current downturn? Or have we entered new, uncharted territory where well-beaten paths lead to financial ruin? To answer these questions, we need to look beyond current headlines about bank failures and volatile oil prices to the underlying forces that are shaping this recession — and the economic recovery that will follow.

Recessions and the Long Wave

Recessions are part of the business cycle, and the business cycle, according to Professor John Sterman of the MIT Sloan School of Management, is one of several behavior patterns which produce the economy’s overall behavior (see “Economic Patterns” graph). Although the business cycle receives the most attention, it is in many ways the least important mode of behavior. It has a fairly small amplitude and averages four years in the United States, so its effects are reversed quickly. The long wave, in contrast, has a much greater amplitude and a longer time-frame, making its effects more significant but less obvious. Both the business cycle and the long wave occur in the context of an even longer process, the life cycle of economic development — a high rate of average economic growth which began with the industrial revolution and has continued through the last 200 years.

The interaction of these forces produces the varied behavior of the economy: During an upswing of the long wave, a recession or downturn of the business cycle is generally short and mild, expansions are longer and more robust, and the rate of economic growth is well above average. During a peak or downturn of the long wave, recessions are more severe and the average rate of economic growth is greatly reduced.
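
One way to picture this interaction is to superimpose the three modes as stylized curves: an exponential growth trend, a long wave of roughly fifty years, and a four-year business cycle. The amplitudes and periods below are rough illustrative values, not output from the MIT model; the point is simply that a business-cycle dip landing on the long wave's downswing cuts much deeper than one landing on its upswing.

```python
import math

# Stylized superposition of the three modes: long-run growth, the long wave,
# and the business cycle. Periods and amplitudes are rough illustrative values,
# not results from the MIT National Economic Model.
GROWTH_RATE = 0.03              # roughly 3% average annual growth
LONG_WAVE_PERIOD = 50.0         # years
BUSINESS_CYCLE_PERIOD = 4.0     # years
LONG_WAVE_AMPLITUDE = 0.10
BUSINESS_CYCLE_AMPLITUDE = 0.03

def activity(year):
    trend = math.exp(GROWTH_RATE * year)
    long_wave = 1 + LONG_WAVE_AMPLITUDE * math.sin(2 * math.pi * year / LONG_WAVE_PERIOD)
    cycle = 1 + BUSINESS_CYCLE_AMPLITUDE * math.sin(2 * math.pi * year / BUSINESS_CYCLE_PERIOD)
    return trend * long_wave * cycle

# A business-cycle dip that lands on the long wave's downswing is noticeably
# deeper than one that lands on its upswing.
for year in range(0, 61, 2):
    print(f"year {year:2d}: activity index {activity(year):6.2f}")
```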

Economic Patterns

How the Long Wave Works

The long wave was first proposed in the early 1900s by a Russian economist named Nikolai Kondratieff (see Global Citizen for a more complete history and description of the long wave theory). Since the 1970s, researchers at MIT have been using system dynamics to understand the structural forces that give rise to the long wave. The resulting computer model of the U.S. economy has correctly anticipated nearly all of the significant economic “surprises” that have occurred in the past two decades: deeper and deeper recessions, speculative “bubbles” in assets, persistent unemployment, and growing political conservatism, among others.

The long wave expansion is powered by many reinforcing processes that lead to high capacity utilization, low unemployment, rising real wages, low real interest rates, rising debt, increased investment, and rising optimism. Due to this growth, the economy slowly becomes overbuilt and imbalances appear. Excess capacity develops. Speculation replaces investment. Unemployment rises and real wages stagnate. Margins fall and price wars develop as firms battle for share in shrinking markets. Real interest rates soar with the decline in prices and inflation, and so do defaults.

These imbalances must be corrected. During past long wave downturns, the correction process resulted in depression, financial panic, deflation and default. Excess capacity, untenable debt, and inflated asset values were eventually eliminated, setting the stage for the next upward cycle. Although the MIT research on the long wave has focused on the U.S. economy, Sterman notes that similar dynamics appear in most industrialized economies.

The Dynamic Engines Driving The Long Wave

Past long wave downturns in the U.S. occurred in the 1830s and 1840s, the 1870s through the late 1890s, and in the 1920s and 1930s. The current long wave peaked around 1979, and will probably bottom out at the low point of the current recession, Sterman believes.

No “Great Depression”

This doesn’t mean we are heading into another Great Depression. “Many people believe that if there is a long wave, that means we will have a repeat of the Great Depression of the 1930s,” notes Sterman, “That’s not true. Each long wave downturn has been different, and the Great Depression represents one extreme.”

However, Sterman warns that the current recession “is going to be deeper and longer-lasting than most people think.” In the past, recessions have resulted from the temporary overbuilding of inventories, which can be worked off quickly. This time, it is not product inventories which are excessive, but the supply of condos and retail space, physical capacity, financial services and the government sector, and commercial debt. “These imbalances aren’t going to go away quickly,” says Sterman. “It takes years, for example, to work off excess real estate.”

Sterman also points out that the long wave does not happen in isolation, outside of our control. “It results from the everyday decision making process and actions throughout business and government.” Although the time is past to prevent the long wave downturn, he believes we still have the ability to choose between an uncontrolled “implosion” of the economy versus a more contained, controlled decline. What’s needed, says Sterman, are lower interest rates, early debt writeoffs, and solid leadership to maintain consumer confidence.

The Economic Circus

So how long will the recession last? No one knows for sure. One of the reasons this downturn is particularly difficult to calculate is the sheer magnitude of the government’s role in regulating the economy. In all previous long wave downturns the government was a much smaller actor — “a mouse among the animals in the economic circus,” as Sterman describes it. In the 1920s, the U.S. government controlled only one-tenth of the gross national product. It had little influence in the events leading up to the Depression and grew significantly in response to the crisis.

Now, with the government controlling one-third of the economy, it has become the elephant in the center ring. Government actions designed to prolong the current expansion have worsened many of the imbalances built up during the upswing of the long wave. The pressures continue to accumulate, not dissipate. In addition, the government — bloated with debt and burdened with enormous deficits — is now less able to respond to any new crisis.

The war in the Persian Gulf is another unknown factor. Although Sterman says it’s tough to say how it will affect the economy, he doubts that the war will bring the economy out of a recession the way World War II stimulated the post-Depression economy. The reason, he explains, is the timing of both wars in relation to the long wave. “World War II occurred just as the long wave had bottomed out, so that the excess capital and labor force were able to be used to meet the new military demand.” The Persian Gulf war, by contrast, is occurring in the downswing of the long wave, “when there are tremendous contractionary forces pushing down government spending.”

A Silver Lining

But Sterman notes there is a silver lining to the current economic outlook. “Keep in mind that the long wave is a cycle, and it will turn around. In the boom of the 1980s, a lot of people thought things would continue going up forever. If people now make the same mistake of thinking the economy will go down forever, they might miss some real opportunities.

“When the current imbalances are resolved, the stage will be set for the next long wave expansion, creating tremendous opportunity. Growth will resume at moderate rates. Real interest rates will decline. Unemployment will fall, though the new jobs will often be in new industries and require new skills. Debt will be rebuilt around a new set of financial institutions, a new set of managers, and newly-stiffened credit standards. A new ensemble of technologies will emerge, powering the growth of new firms.”

The Long Wave: Changing the Rules of the Game

The downturn of the long wave, as Sterman describes it, is “a time of radical change.” The imbalances it generates spill out into the social and political realm, creating new threats and opportunities — in effect, changing the rules of the game. Here are a few trends to watch for:

  • Intense technological innovation. All economies are organized around a fairly small number of core technologies. Radical new technologies, although superior, often are incompatible with the existing infrastructure. The airplane, for example, was a superior method of transportation, but it did not fit into an economy that was organized around rail transportation, steam, and coal. A window of opportunity opened up only after the bankruptcy of the railroad industry during the Great Depression.

Our current infrastructure is centered around oil, internal combustion, the automobile and aircraft, synthetic (oil-based) materials, electricity, telephone, radio, and television. As the recession intensifies and our current technological base weakens, new technologies will vie for a place in the foundation of the next long wave upswing, says Sterman. “It seems clear that the new ensemble of technologies will rely heavily on computers, biotechnology and more environmentally sound modes of transport, energy production, and materials use.”

  • Regulatory backlash. Economic historians have identified three great merger waves: 1870-1902, the 1920s, and the 1980s. All three occurred during a long wave downturn. In the past, every merger wave has been followed by a regulatory backlash, resulting in measures such as the Sherman Antitrust Act of 1890, the Glass-Steagall Act, and the creation of the Securities and Exchange Commission. The regulatory backlash in the current long wave is already building, says Sterman. Insider trading and securities law violations prosecuted by the government increased significantly in the 1980s, and the backlash will spread rapidly to other areas.

    Some industries which were deregulated during the downturn will be candidates for reregulation and new industries and technologies will be brought under regulation as the government seeks to redress past imbalances. Possible candidates: double hull requirements for oil tankers, limits on caller ID and other new information technologies, and tougher Food and Drug Administration approval processes.

  • Changing treatment of debt. The pressures of the long wave downturn change the relationships between borrower and lender. As prices fall and the impossibility of repayment becomes clear, the political pressure from massive foreclosures and defaults will lead to relaxation of bankruptcy laws, as it did during the depressions of 1837-43, 1873-78, and the 1890s. The S&L bailout is the largest bank holiday to date; it likely will not be the last.

 

And the great pendulum of the U.S. economy will once again begin its upward march.

Portions of this article were adapted from “A Long Wave Perspective on the Economy in the 1990s,” by John D. Sterman, which appeared in the July 1990 issue of The Bank Credit Analyst. For more information on the long wave and the National Economic Model, contact John D. Sterman, 552-562, MIT Sloan School of Management, 50 Memorial Drive, Cambridge, MA 02139, (617) 253-1559.

Managing Hospital Emergency Capacity

It’s 11:30 on a Friday night at San Jose Medical Center. In the operating room are the victims of an auto accident — a woman, seven months pregnant, and her five-year-old son. In the emergency department, 14 patients and their families fill the treatment rooms and waiting areas. Three of them are critically ill.

With the emergency staff and two surgical teams fully occupied, the supervisor is about to direct staff to temporarily divert new paramedic patient arrivals to other hospitals. Two of the three closest hospitals already are diverting patients due to overload. A few minutes later, paramedics radio the Medical Center. They just picked up a woman near one of the diverting hospitals. She is having a severe asthma attack. She needs attention quickly. Can the Medical Center take her?

Such scenes as the one described above were occurring with greater frequency at the San Jose Medical Center (SJMC) in the late 1980s. A hospital goes on temporary diversion status when it cannot safely accept more emergency patients because of insufficient staff, operating rooms, or beds. In San Jose, the proportion of time in which paramedic patients were diverted to other hospitals gradually increased from a monthly average of 5% in 1986 to 35% in 1990 (see “Paramedic Diversion Rate” graph). The increase in paramedic diversions was not confined to SJMC. Other hospitals were experiencing diversion rates ranging from 25-65%. The county-operated hospital, which had the heaviest emergency patient load, also had the highest diversion rate — more than 80% in early 1990.

Paramedic Diversion Rate

The growing diversion rates indicated a number of stresses in the San Jose community hospital system. Between 1986 and 1988, for example, the shift from scheduled admissions to emergency admissions grew at a six percent average annual rate. During the same period, demand for hospital critical care services increased by four percent per year.

While demand for these services was growing, actual hospital capacity for critical care services remained fixed or declined. The demand for emergency-origin patients had grown faster than capacity, and the community-wide emergency medical system was becoming dysfunctional.

Decreased Quality of Service

The community incurs tremendous costs when its emergency service system has such frequent closures. When paramedic diversion rates increase, service quality suffers because of the longer time delays as paramedics “circle” around looking for an open hospital. Not only is treatment delayed, but the paramedics are distracted from their patient care duties while they spend time on the radio trying to find an open hospital.

Since already overloaded hospitals are more likely to accept patients whose conditions are less severe, medical conditions are more likely to be misrepresented in the field. And, as the paramedics’ frustration mounts, the number of patients brought in with no prior alert — and therefore no hospital preparation — increases. These unexpected arrivals only add to the overload and lengthen the already long wait.

Systems Thinking Approach

San Jose Medical Center was particularly concerned with the growing problem of paramedic diversions since it has a history of commitment to emergency medical service. SJMC operates the busiest of three designated trauma centers in the county, and its location near the center of the population and several freeways makes it a leading hospital for paramedic patients. But as the number of diverted patients from other zones had grown, SJMC’s capacity to treat patients from its own market was greatly reduced. The result was frequent patient backlogs at various points within the Medical Center.

To address this problem, our team of department directors and executives at SJMC applied a systems thinking approach to better understand and address the growing unavailability of Medical Center capacity for emergency patients.

The key questions we addressed were:

  • Where are the highest-leverage points for improving our capacity to serve emergency patients?
  • What will it take (resources, structural changes) to implement those strategies?
  • How much of the gap between current diversion rates (35%) and our short-term goal of 12% can be reduced by internal interventions?

(Editor’s note: Readers may want to try their hand at answering the above questions and perhaps identify analogous “emergency service” situations in their own organizations.)

To address the problem of capacity constraints and paramedic diversions, the SJMC project team began by gathering subjective data. Our goal was to collect team members’ “conventional wisdom” (mental models) about the emergency care system — its problems, the causes, and possible solutions. We then identified measurable outcomes SJMC wanted to attain, compared our actual performance with the desired performance, and calculated the gap we needed to close. To develop a preliminary understanding of the potential leverage points within the system, we developed a conceptual model which described how the system works in terms of typical operational situations.

Domino Effect

When we looked more closely at current operations of individual hospitals, we found that when the county-operated hospital diverted paramedic patients, a nearby community hospital would also go to diversion status to avoid becoming overloaded and receiving “undesirable” emergency patients. This behavior worsened the impact on other hospitals, particularly SJMC, which would then receive both hospitals’ overflow (see “Domino Effect of Paramedic Diversions” graph). The overflow would cause SJMC to go into diversion status more rapidly, causing diverted patients to flow to the next hospital, and so on. In effect, a primary cause of the overall paramedic diversion problem was the ripple effects of capacity limitations at one hospital.
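
The cascade can be sketched as a simple loop over the community's hospitals: when a hospital's load exceeds its capacity it goes on diversion, and its arrivals spill over to whoever remains open, which may push the next hospital over its limit. The hospital capacities and arrival rates below are invented for illustration and are not drawn from the SJMC study.

```python
# Illustrative sketch of the domino effect: a hospital whose load exceeds its
# capacity goes on diversion, and its arrivals spill over to the hospitals that
# remain open. The capacities and arrival rates are invented, not SJMC data.
capacity = {"County": 30, "Community": 25, "SJMC": 35}   # patients per shift
arrivals = {"County": 34, "Community": 20, "SJMC": 24}

open_hospitals = set(capacity)
changed = True
while changed:
    changed = False
    diverted = sum(arrivals[h] for h in capacity if h not in open_hospitals)
    load = dict(arrivals)
    for h in open_hospitals:
        load[h] += diverted / len(open_hospitals)  # overflow shared by open hospitals
    for h in sorted(open_hospitals):
        if load[h] > capacity[h]:
            print(f"{h} is overloaded ({load[h]:.0f} > {capacity[h]}) and starts diverting")
            open_hospitals.remove(h)
            changed = True
            break

print("Still open:", ", ".join(sorted(open_hospitals)) or "none: system-wide diversion")
```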

Our initial project goal had been to reduce SJMC’s diversion rate, because diversions meant not meeting one of our most basic responsibilities — to provide emergency medical care to our community. But now that we understood the broader community-wide system, we realized that the more SJMC increased its capacity in order to prevent going to diversion status, the more patients we received from other hospital diversions. The result was an even greater strain on capacity, and more SJMC diversions.

Shifting the Burden Structure

As long as other hospitals were diverting a significant proportion of paramedic patients, stepping up capacity at SJMC would simply result in our seeing more and more diverted patients until we exhausted our added capacity (loop B1 in “Shifting the Burden of Emergency Care”). In addition, our efforts to increase internal capacity were unintentionally alleviating pressure on the system to resolve the broader problem — an overall lack of emergency capacity at the community’s hospitals (loop B2).

Domino Effect of Paramedic Diversions

When the county hospital and another area hospital go on diversion status, SJMC receives both hospitals’ overflow. As a result, SJMC is driven into diversion status more quickly, creating a “domino effect” of diversions throughout the community emergency service system (left).

We began to realize that the fundamental solution to the community-wide problem was a public policy intervention that would ensure that all hospitals maintain a generally “open” emergency medical service. That way, no one hospital or group of hospitals could disrupt the entire system.

Ironically, our previous capacity expansion efforts had masked the need for a public policy intervention (loop R1), because the burden of providing adequate emergency service capacity had been shifted to SJMC. We concluded that the single, highest-leverage solution to the problem was to implement public policy that would require all hospitals in the emergency medical care system to receive paramedic patients (except in rare instances).

Reframing the Problem

In light of the above insights, we revised our definition of the problem. It was now evident that external action was likely to have the greatest impact on SJMC’s diversion rate. Internally, however, several questions still remained: If the community-wide diversion rate were reduced to a reasonable level, would SJMC continue to have an emergency capacity problem? If new capacity were needed, which internal interventions will produce the greatest yield in terms of freeing up capacity and enabling more patients to be served? In order to address such questions, we developed a computer model.

Our initial work had provided a great deal of insight into the diversion/capacity issue; the computer model now allowed us to leverage our understanding even further. If the initial phase took us from an understanding level of one to five, using the computer allowed us to leap to a level of nine. The model enabled us to test out specific policy recommendations, compare the results with other policies, and assess which ones were more desirable — all without risking a single patient’s life or a physician’s career.

We tested alternative strategies for improving internal capacity, such as adding staff, adding beds or operating rooms, altering the protocol for paramedic diversions, improving system productivity, and moving patient bed locations. We then measured various outcomes: patient waiting times, number of patients treated, and financial results. Among the 35 suggestions that we originally collected from the project team, we were able to isolate two key leverage points: reducing treatment times in the critical care and telemetry units and improving the shared nursing arrangements among three departments to increase their ability to meet surges in demand.
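
A toy version of that scenario-testing exercise can be written as a short simulation: emergency patients arrive, occupy beds for some treatment time, and queue when no bed is free, and alternative policies are compared on throughput and backlog. The bed counts, arrival rates, and treatment times below are invented and bear no relation to the actual SJMC model; the sketch only illustrates the style of comparison described above.

```python
import random

# Toy scenario-testing sketch in the spirit of the exercise described above:
# compare adding beds vs. shortening treatment times on a very simple queue of
# emergency patients. All numbers are invented and bear no relation to the
# actual SJMC model.
random.seed(1)

def simulate(beds, mean_treatment_hours, hours=24 * 30, max_arrivals_per_hour=3):
    in_treatment = []        # remaining treatment hours for each occupied bed
    waiting = treated = 0
    queue_hours = 0
    for _ in range(hours):
        in_treatment = [t - 1 for t in in_treatment if t > 1]  # finish or progress
        waiting += random.randint(0, max_arrivals_per_hour)    # new arrivals this hour
        while waiting and len(in_treatment) < beds:            # admit while beds are free
            waiting -= 1
            treated += 1
            in_treatment.append(random.expovariate(1 / mean_treatment_hours))
        queue_hours += waiting
    return treated, queue_hours / hours

for label, beds, treatment in [("baseline",               10, 7.0),
                               ("add two beds",           12, 7.0),
                               ("cut treatment time 15%", 10, 7.0 * 0.85)]:
    treated, avg_queue = simulate(beds, treatment)
    print(f"{label:24s} treated={treated:4d}  average queue={avg_queue:6.1f}")
```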

Initial Outcomes

Shifting the Burden of Emergency Care

Our SJMC team completed the systems thinking project in the summer of 1990. Since then, on the joint initiative of the Hospital Conference of Santa Clara County (which included SJMC representatives), the County Medical Society, and the County Emergency Medical Services Agency, the San Jose community implemented a public policy that maintains an “open” emergency status at all of the community’s hospitals. Although it is still too early to evaluate the long-term results, the diversion rate virtually disappeared in the initial months. In light of the systems “rule of thumb” that quick fixes do not produce lasting results, we expect diversion problems may creep back up during the coming months. Over time, we may need to refine the public policy approach so that long-term adjustments are made and the system is restored to balance. In addition to working on the community-wide capacity problems, at SJMC we are currently using the insights gained from the computer model to develop strategies for improving our internal capacity.

Bette Gardner is a healthcare management consultant in Morgan Hill, California. She is applying systems thinking in her work at San Jose Medical Center and elsewhere.

New England Fishing Industry: Who’s Minding the Fish?

New England fishermen are in trouble, victims of a “get-it-while-you-can mentality” that could exhaust fish stocks beyond recovery and threaten the existence of one of America’s oldest industries, declared a recent Wall Street Journal article. (“Dead in the Water: Overfishing Threatens to Wipe Out Species and Crush Industry,” July 16, 1991). Frustrated fishermen have watched their daily catch dwindle over the years, while the number of foreclosure auctions on boats continues to climb.

The current scenario is what U.S. government officials thought they were preventing in 1977 when they banned foreign trawlers from a 200-mile boundary around U.S. shores. The action was intended to reserve fish stocks for local fishermen. But soon local boats took the place of foreign vessels. The New England ground fishing fleet grew from 590 vessels in 1976 to more than 1,000 in the early 1980s. New, more expensive trawlers with advanced fish-finding electronics were built, making it easier to track down good fishing beds and navigate tricky waters near rocks or shipwrecks.

New England Fish Catches

In 1982, under intense pressure from fishermen, the New England Fishery Management Council lifted catch quotas and trip limits. With no limits on catches or trips and with advanced fishing tools, fishermen saw dramatic increases in their catches. But trawler catches peaked in 1983 at 415 million pounds, then fell off sharply (see “New England Fish Catches”). Now fishermen are scrambling to bring in catches that are smaller than any in recent memory.

Not only are the numbers of fish in each catch declining, but so is the size of each fish. Smaller fish are less likely to have spawned, thus jeopardizing future fish stocks (see “Limits to Fish Catches”). Flounder and haddock are already near record lows. The cod count is down, and bluefin tuna and swordfish have been depleted. The problem is not restricted to New England. Red snapper is declining in the Gulf of Mexico, swordfish are down in the Atlantic, and salmon are under pressure in the Pacific.

Some fishermen deny that overfishing is to blame, arguing that natural cycles are at work and the stocks will rebound in time. Others contend that pollution is the culprit, although that does not explain why unpopular fish like skate and dogfish are flourishing.

Limits to Fish Catches

One thing is certain — fishermen, who face large mortgages on their expensive boats, feel intense pressure to fish as aggressively as they can. Many now fish longer hours in order to make up for smaller catches. Some scallop fishermen in New Bedford, for example, have begun working almost round the clock. The combination of high-tech equipment and longer working hours means “fish have no sanctuary of space or time.”

Current efforts to maintain fish stocks include setting minimum net mesh sizes and occasionally closing overfished waters. But some fishermen and conservationists are demanding stricter limits: catch quotas, trip limits, and even a moratorium on new boats. Legislation is pending that proposes reducing the fleet size by buying out boats with money from a tax on the diesel fuel that fishing boats use.

Discussion Questions

  • How does the “Limits to Success” structure shown above play into a “Tragedy of the Commons” structure?
  • What are the incentives for the fishermen to keep fishing and who controls those incentives?
  • Based on your understanding of the “Tragedy of the Commons” archetype, what “solutions” would you recommend?
  • Are there similar “Tragedy of the Commons” structures playing out in your organization or industry?

Although the “Limits to Fish Catches” diagram captures the main structural elements that are responsible for the collapse of fish stocks, it lacks some key distinctions that are critical for understanding the crisis. The “Tragedy of the Commons” archetype shows how each fisherman’s actions are contributing to the declining catches (see “Tragedy of the Fishing Grounds”).

The paradoxical logic of this archetype says that each person pursuing his best interest will collectively create a future state that nobody wants. For example, if fisherman A increases his fishing trips he will increase his catch as well as his revenues (R2). The more trips he makes, the more money he makes. The same goes for fisherman B (R3). But as hundreds and hundreds of fishing boats enter the waters, the number of fish available rapidly declines, leading to smaller catches and lower revenues for everyone (B2 & B3).

Tragedy of the Fishing Grounds

The number of trips each fisherman makes is determined by his individual revenue goals. If there were other employment alternatives, the decline in revenues might actually lead to fewer fishing trips as fishermen looked for other forms of employment. For many of them, however, fishing is the only livelihood they know. Family tradition and lack of other job skills suggest that they will continue fishing. Thus as revenues sink lower, the financial pressure mounts, spurring them to increase their fishing trips. New boat and electronic equipment purchases that were made in the early 1980s only exacerbate the financial pressure (see “Incentives and Pressures”).

As fisherman B goes on more fishing trips, the amount of fish he catches will increase, thus raising the total number of fish caught. Since the number of available fish has decreased, the next time out B will catch fewer fish, resulting in lower revenues and increased financial pressure. Fisherman B will feel he has no choice but to go on even more fishing trips. Multiply that scenario by a thousand and you have the current situation in New England — each player is being driven to fish harder, the stocks of fish are being depleted faster, and the size of the catch is growing smaller.
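
The spiral described in the last few paragraphs can be sketched as a simple simulation: a shared, regenerating fish stock is harvested by many identical boats, and whenever revenue falls short of what each boat needs, financial pressure pushes every boat to make more trips. All of the parameters below (stock size, regeneration rate, catch per trip, revenue target) are invented for illustration.

```python
# Illustrative tragedy-of-the-commons sketch: a shared, regenerating fish stock
# harvested by many identical boats. When revenue falls short of what a boat
# needs, financial pressure pushes it to make more trips. All parameters are
# invented for illustration.
CARRYING_CAPACITY = 1_000_000.0  # tons the grounds could ultimately support
REGEN_RATE = 0.3                 # logistic regeneration rate per year
CATCH_PER_TRIP = 0.0000025       # fraction of the stock landed per trip
BOATS = 1_000
PRICE = 1_200.0                  # dollars per ton
REVENUE_TARGET = 120_000.0       # what each boat needs to cover its mortgage

stock = 600_000.0   # tons of fish
trips = 60.0        # trips per boat per year

for year in range(1, 16):
    catch_per_boat = trips * CATCH_PER_TRIP * stock
    total_catch = min(BOATS * catch_per_boat, stock)
    regeneration = REGEN_RATE * stock * (1 - stock / CARRYING_CAPACITY)
    stock = max(stock + regeneration - total_catch, 0.0)
    revenue = PRICE * total_catch / BOATS
    # Falling revenue raises financial pressure, which drives more trips.
    trips *= min(1.5, max(0.8, REVENUE_TARGET / max(revenue, 1.0)))
    print(f"year {year:2d}: stock {stock:9.0f} tons  trips/boat {trips:5.0f}  "
          f"revenue per boat ${revenue:9,.0f}")
```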

Altruistic Actions

Incentives and Pressures

Suppose that, in order to rectify this situation, I as an individual fisherman decide to cut back on the number of trips and dip into my savings to survive until the fish stocks are replenished. How long would I have to wait? Probably a lot longer than my bank account could hold out. The reason is that the solution to a tragedy of the commons structure never lies at the individual level. My individual actions to stop fishing will simply leave my usual daily quota of fish for some other fishermen to catch.

To be effective, the solution must affect the actions of all the fishermen by linking the long-term losses more directly with the current gains. Each of the proposed solutions — catch quotas, trip limits, a moratorium on new boats — shows promise, but to be truly effective they will need to be used in combination. For example, instituting a moratorium on new boats without limiting catch size or number of trips would work only if the current fleet’s capacity to fish was less than the fish’s regeneration rate, which does not seem likely according to the data. Enforcing catch quotas and trip limits without providing ways to compensate for the accompanying financial hardships will be difficult. Faced with the prospect of financial ruin, most fishermen are likely to continue fishing as hard as ever.

The plan to buy back boats using funds obtained through a diesel tax shows promise. It is the equivalent of a user fee that taxes individuals proportionate to their usage of a resource or facility, similar to toll roads or gasoline taxes that compensate for the wear and tear on roadways. From a systems perspective it has several advantages: it links the individual’s actions of today with the longer implications of those actions, and it helps distribute the financial burden over the entire fishing community.

There still are problems with this solution, however. As always, the use of a tax is unpopular with voters who are affected by it, making it politically difficult to follow through with such legislation. Even if the tax went through, it is unlikely that the revenues would be large enough to pull a significant number of boats out of circulation (the market value of all the boats is in the hundreds of millions of dollars). A matching fund of some kind may be required (a haddock tax, perhaps?) to supplement the diesel tax. Again, without accompanying measures like quotas and trip limits, the tax may only serve to increase the financial pressure, forcing everyone to fish even harder.

Adam Smith’s Invisible Hand

Our free market economy is based on Adam Smith’s theory, which argues that if everyone pursues his or her self-interest, the free market system will produce the best outcome for society as a whole. But does this theory still hold true? If the “Tragedy of the Commons” structure tells us nothing else, it says that the invisible hand theory is seriously flawed. The crisis in the fishing industry and many other “Tragedy of the Commons” structures testify to the fact that the “invisible hand” works only when individual actions are pursued in an environment that is virtually limitless relative to the impact of the collective actions. In Adam Smith’s time, that may have been true. But with the world rapidly becoming a tightly-interconnected “global village,” it may be that the invisible hand is poised to give us a slap on the face.

Tragedy of the Commons: All for One and None for All

In this issue we return to our coverage of systems archetypes — dynamic structures that are found repeatedly in diverse settings. In future issues, we will alternate between archetypes and other tools in the systems thinker’s toolbox.

Do you recall any hot summer days when you and your family decided to spend a relaxing day at the local swimming pool? You loaded up the car and arrived at the pool only to discover that every other family had the same idea. So instead of the relaxing outing each family anticipated, everyone ended up spending a nerve-wracking day dodging running children and trying to cool off in a pool filled with wall-to-wall people. In many similar situations, people hoping to maximize individual gain end up diminishing the benefits for everyone involved. What was a great idea for each person or family becomes a collective nightmare for them all.

Tragedy of the Commons Template

In a “Tragedy of the Commons” structure, each person pursues actions which are individually beneficial (R), but eventually result in a worse situation for everyone (B).

Individual Gain, Collective Pain

At the heart of the “Tragedy of the Commons” structure lies a set of reinforcing actions that make sense for each individual player to pursue (see “Tragedy of the Commons Template”). As each person continues his individual action, he gains some benefit. For example, each family heading to the pool will enjoy cooling off in the swimming area. If the activity involves a small number of people relative to the amount of “commons” (or pool space) available, each individual will continue to garner some benefit. However, if the amount of activity grows too large for the system to support, the commons becomes overloaded and everyone experiences diminishing benefits.

Traffic jams in L.A. are a classic example of how a “public” good gets overused and lessened in value for everyone. Each individual wishing to get quickly to work and back uses the freeway because it is the most direct route. In the beginning, each additional person on the highway does not slow down traffic because there is enough “slack” in the system to absorb the extra users. At some critical level, however, each additional driver brings about a decrease in the average speed. Eventually, there are so many drivers that traffic crawls at a snail’s pace. Each person seeking to minimize driving time has in fact conspired to guarantee a long drive for everyone.
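
A stylized numerical version of the freeway example makes the turning point visible: average speed holds up while there is slack in the system, then falls with each additional driver, and past some point the total flow of traffic falls as well. The speeds and driver counts below are arbitrary assumptions.

```python
# Stylized version of the freeway example: average speed holds up while there
# is slack, then falls with each additional driver, and past a point the total
# flow falls too. The numbers are arbitrary and for illustration only.
FREE_FLOW_SPEED = 60.0    # mph with slack in the system
CRITICAL_DRIVERS = 2_000  # drivers the freeway absorbs without slowing

def average_speed(drivers):
    if drivers <= CRITICAL_DRIVERS:
        return FREE_FLOW_SPEED
    return max(5.0, FREE_FLOW_SPEED - 0.02 * (drivers - CRITICAL_DRIVERS))

for drivers in (500, 1_000, 2_000, 3_000, 4_000, 5_000):
    speed = average_speed(drivers)
    throughput = drivers * speed  # a rough proxy for total vehicle-miles per hour
    print(f"{drivers:5d} drivers: average speed {speed:4.0f} mph, throughput {throughput:9,.0f}")
```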

This structure also occurs in corporate settings all too frequently. A company with a centralized salesforce, for example, will suffer from the “Tragedy of the Commons” archetype as each autonomous division requests that more and more effort be expended on its behalf. The division A people know that if they request “high priority” from the central sales support they will get a speedy response, so they label more and more of their requests as high priority. Divisions B, C, D, and E all have the same idea. The net result is that the central sales staff grows increasingly burdened by all the field requests and the net gains for each division are greatly diminished. The same story can be told about centralized engineering, training, maintenance, etc. In each case, either an implicit or explicit limit is keeping the resource constrained at a specific level, or the resource cannot be added fast enough to keep up with the demands.

Brazil’s Inflation Game

When the shared commons is a small, localized resource, the consequences of a “Tragedy of the Commons” scenario are more easily contained. At a national level, however, the “Tragedy of the Commons” archetype can wreak havoc on whole economies. Take inflation in Brazil, for example. The country’s inflation rate was 367% in 1987, 933% in 1988, 1,764% in 1989, and 1,794% in 1990. With prices rising so rapidly, each seller expects inflation to continue; therefore, seller B will raise his price to keep up with current inflation and hedge against future inflation (see “Brazil’s Inflation Tragedy”). With thousands of seller B’s doing the same thing, inflation increases and reinforces expectations of continued inflation, leading to another round of price increases (R1).

Inflation also leads to indexation of wages, which increases the cost of doing business. In response to rising business costs, Seller A raises her price, which fuels further inflation (R2). Since there are thousands of Seller A’s doing the same thing, their collective action creates runaway inflation. The underlying health of the economy steadily weakens as the government and businesses perpetuate endless cycles of deficit spending to keep up with escalating costs. Over time, everyone grows increasingly preoccupied with using price increases to make profits rather than investing in ways to be more productive. Eventually the economy may collapse due to high debts and loss of global competitiveness, resulting in dramatic price adjustments (B1 & B2).
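
A minimal numerical sketch of loops R1 and R2: each year sellers raise prices to keep up with expected inflation plus a hedge, indexed wages pass part of that increase back into costs and prices, and expectations then adapt to the realized inflation. The coefficients are invented; the only point is the runaway shape the reinforcing loops produce.

```python
# Minimal sketch of the reinforcing inflation loops: prices are raised to match
# expected inflation plus a hedge (R1), and wage indexation feeds part of the
# realized inflation back into costs and prices (R2). The coefficients are
# invented; only the runaway shape of the loop matters.
expected_inflation = 1.00  # 100% per year, expressed as a fraction
HEDGE = 0.15               # extra increase to protect against future inflation
INDEXATION = 0.25          # share of realized inflation passed through wages to costs

for year in range(1, 8):
    price_increase = expected_inflation * (1 + HEDGE)       # R1: keep up and hedge
    realized_inflation = price_increase * (1 + INDEXATION)  # R2: indexed costs add on
    expected_inflation = realized_inflation                 # expectations adapt
    print(f"year {year}: inflation {realized_inflation:7.0%}")
```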

Brazil’s Inflation “Tragedy”

Common “Commons”

Perhaps the trickiest part of identifying a “Tragedy of the Commons” archetype at work is coming to some agreement on exactly what is the commons that is being overburdened. If no one sees how his or her individual action will eventually reduce everyone’s benefits, the level of debate is likely to revolve around why individual A should stop doing what he is doing and why individual B is entitled to do what she is doing. Debates at that level are rarely productive because effective solutions for a “Tragedy of the Commons” situation never lie at the individual level.

In the sales force situation, for example, as long as each division defines the commons to include only its performance, there is little motivation for anyone to address the real issue — that the collective, not individual, action of each division vying for more sales support is at the heart of the problem. Only when there is general agreement that managing the commons requires coordinating everyone’s actions can issues of resource allocation be settled equitably.

Managing the Commons

Identifying the commons is just the beginning. Other questions that help define the problem and identify effective actions include: What are the incentives for individuals to persist in their actions? Who, if anybody, controls the incentives? What is the time frame in which individuals reap the benefits of their actions? What is the time frame in which the collective actions result in losses for everyone? Can the long term collective loss be made more real, more present? What are the limits of the resource? Can it be replenished or replaced?

The leverage in dealing with a “Tragedy of the Commons” scenario involves reconciling short-term individual rewards with long-term cumulative consequences. Evaluating the current reward system may highlight ways in which incentives can be designed so that coordination among the various parties will be both in their individual interest as well as the collective interest of all involved. Since the time frame of the commons “collapse” is much longer than the time frame for individual gains, it is important that interventions are structured so that current actions will contribute to long-term solutions.

For further reading about this archetype, see The Fifth Discipline: The Art and Practice of the Learning Organization (Doubleday, 1990), by Peter M. Senge.
