A Musing: The Human Dilemma of Requisite Variety
- Anne Ross
- Jun 3
Updated: Jun 4
How a certain cybernetics law sheds some light on a dilemma of our human ethical identity
cybernetics (n.)
the science of communication and control theory
that is concerned especially with the comparative study
of automatic control systems
(such as the nervous system and brain
and mechanical-electrical communications systems)
—Merriam-Webster Dictionary online
[emphasis added]
IN CYBERNETICS, the law of requisite variety posits that the element of a system with the greatest flexibility (the largest set of possible actions and responses) will end up controlling the system—that is, will end up making final decisions on behalf of—and even about the nature of—the system.
This applies similarly to human systems.
Person 1 Plays Fair, Person 2 Does Not
A simple example is a two-person system where Person 1 plays fair and Person 2 does not. Person 2, who has the greater requisite variety—the wider scope of possible action available—will end up running the system.
This is because Person 1 has self-imposed any number of limits (ethical standards) on Person 1’s own behavior and Person 2 has not done likewise. [In the grand scheme of things, though, Person 1 has, in a creatorly sense, chosen those self-limitations.]
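The control dynamic in this two-person example can be sketched as a toy simulation. To be clear, everything below—the shared state, the targets, the move limits, the round count—is an illustrative assumption of mine, not anything from cybernetics proper; it only shows the bare mechanism by which a wider repertoire wins:

```python
# Toy illustration of requisite variety: two agents take turns
# steering a shared integer state toward their own targets.
# Person 1's self-imposed limits allow moves of at most 1 per turn;
# Person 2's wider repertoire allows moves of up to 3.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def steer(state, target, max_move):
    """Move toward target, limited by the agent's repertoire."""
    return clamp(target - state, -max_move, max_move)

P1_TARGET, P1_MAX = 0, 1    # self-limited repertoire
P2_TARGET, P2_MAX = 10, 3   # unrestrained repertoire

state = 0
for _ in range(20):
    state += steer(state, P1_TARGET, P1_MAX)  # Person 1 acts
    state += steer(state, P2_TARGET, P2_MAX)  # Person 2 acts

print(state)  # ends at Person 2's target: 10
```

Whatever Person 1 does, Person 2 can always out-move it, so the state settles at Person 2's target—the agent with greater variety controls the system.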
Control by Degree
It follows that the less self-limiting Person 2 is (i.e., lacking in ethical self-restraint), the more Person 2 will control the system. This “control” may be visible or invisible, active or dormant. Person 1 may or may not be aware of its impact and reach.
Conversely, the more self-limiting Person 1 is, the more an unrestrained Person 2 is left to control the system.
Person 1’s Dilemma
If we temporarily set aside Person 1’s “creator point of view” (in choosing an ethical canon—the Golden Rule, for example, which takes other people into account in terms of both “do unto others” and “don’t do unto others”), the human dilemma that Person 1 faces is: (a) “How do I live with myself and my conscience in peace, enjoying life, liberty, and the pursuit of happiness, (b) while letting Person 2’s version of ‘life, liberty, and the pursuit of happiness’ (which does not include living by the Golden Rule) violate my own?”
For Person 1, living by the Golden Rule (for example) is tantamount to self-hobbling in order to achieve a better experiential quality of life—a tradeoff that centuries of human experience have confirmed as worthwhile.

Person 1’s quality of life (as governed by a set of self-limiting principles) gives rise to a serious dilemma when that quality of life is sufficiently overtaken by Person 2’s quality-of-life choices (made without self-limits). At the point Person 2 ends up controlling the system, Person 1 loses quality of life (a double bind).
Illustrative Scenarios
A certain line of code—the programming instruction “Thou shalt not kill”—illustrates the above well.
In the below scenarios, Person 1 lives by the ethical standard “Thou shalt not kill”; Person 2 does not.
Scenario One
Person 2 kills Person 1.
Person 1 now has no requisite variety whatsoever and Person 2 controls the system.
Scenario Two
Person 2 kills a third person, which violates Person 1’s standard.
Because of Person 1’s own self-limitation, Person 1 cannot kill Person 2 to prevent Person 2 from continuing to kill.
Because this self-limitation presents a double bind for Person 1, Person 1 must find other solutions to the social problem Person 2 poses.
Among the possible solutions, the most extreme is the death penalty. The death penalty makes society-at-large an acceptable agent of what Person 1 otherwise considers unethical behavior, letting Person 1 sidestep personal responsibility for killing Person 2.
Effectively, the death penalty enables Person 1 to eliminate Person 2 from the system indirectly. Person 2, being dead, now has no requisite variety within the system (except insofar as Person 2 may have left instructions that continue to operate within the system).
This allows Person 1 to continue to abide by the self-imposed and socially compacted “Thou shalt not kill” ethic, even if by letter and not in spirit.
Scenario Two Point One
Instead of killing Person 2 indirectly via a social entity as in Scenario Two, Person 1 indirectly incarcerates Person 2, which accomplishes the same goal—eliminating Person 2 from the system dynamic.
Countermeasures
One solution to the dilemma would be for Person 1 to modify Person 1’s own ethics so as to match Person 2’s level of requisite variety. However, this would lead to Person 1’s becoming someone Person 1 doesn’t want to be—a monster. And any control conflicts between Person 1 and Person 2 (and others in the system) would by default devolve to “an eye for an eye”-type justice, where no one ends up with clean hands or sees better.
Another solution is the assertion of a special kind of collective authority—one that derives from a variety of larger intelligence not available to single individuals, and that allows for common sense, wisdom, practicality, swift but unerring action, and our species’ sense of human continuity. We see it arise in juries of twelve and other such bodies tasked with responsibility for “the whole of us” as human beings. By extension, certain protocols function like the miscellaneous drawer in most kitchens: tools for asserting that special kind of collective authority when no others appear to be available. The Tenth Amendment to the U.S. Constitution, which reserves powers not delegated to the federal government to the States or to the People, is one such protocol.
A Heartening Random Afterthought
It appears that context outcontextualizes even requisite variety.
The Golden Rule—a “law” that is merely the recognition of a state—one of universal equilibrium within the greatest possible context—will always prevail.
That is, the two-person system mentioned above ultimately resides within the larger context of a universal “prevailing state”—in which always inheres, as an absolute, the greatest possible requisite variety.