PART TWO
A THEORY OF ORGANIZATION
In the previous part we scrutinized several of the key building blocks of the committee government model, concentrating on the notions that committees are autonomous and distinctive. Autonomy can mean many things; we focused on the extent to which the committee personnel process can reasonably be described as autonomous, concluding that it cannot. Distinctiveness also can mean many things; we focused on the geographical and ideological representativeness of House committees, concluding that most committees are representative most of the time.
In this part we shift gears from an examination of empirical details to a broad theoretical question: Why and how might a group of legally equal and often contentious legislators nonetheless create and maintain parties? The answer that we give to this fundamental question is similar in essential respects to the “theory of the firm” developed in the industrial organization literature in the 1970s and 1980s. But one need not be familiar with this literature to follow the argument. The basic ideas – which are also available in the Hobbesian theory of the state, the theory of political entrepreneurship, and elsewhere – boil down to this: parties are invented, structured, and restructured in order to solve a variety of collective dilemmas that legislators face.1 These “collective dilemmas” – situations in which the rational but unorganized action of group members may lead to an outcome that all consider worse than outcomes attainable by organized action – are inherent in the drive to be reelected in a mass electorate and in the process of passing legislation by majority rule.
The primary method of “solving” the collective dilemmas that legislators face, we argue, is the creation of leadership posts that are both attractive
1 This statement may suggest that we emphasize the degree to which organizational design and reform are intentional. We certainly do emphasize this intentionality more than, say, Hayek (1960) does. But we do not mean to imply that those who attempt to design institutions are infallible or to preclude the kind of evolutionary perspective articulated in Alchian (1950).
and elective. In Chapter 4 we survey theories of organizational leadership, explaining how it is that the institution of leadership positions can ameliorate the dilemmas facing groups of workers, citizens, legislators, and so forth. In Chapter 5 we adapt the ideas sketched out in Chapter 4 to the specific case of elected legislators.
4
Institutions as Solutions to Collective Dilemmas
Starting with this and the next chapter, we begin to articulate a view of parties as legislative cartels. This metaphor seems apt to us in part because both cartels and parties – indeed organizations in general – face a variety of collective dilemmas that must be solved if the organization is to operate effectively. This chapter accordingly deals with the general topic of organizational design and structure. The next chapter then focuses more specifically on legislative parties.
Social scientists from a variety of disciplines study institutions such as legislatures, business firms, public and private bureaucracies, armies, and trade associations. This chapter reviews what we consider to be the most satisfying and comprehensive theory of institutional origins and design: what we shall refer to as the neo-institutional or neocontractarian theory. This theory, exposited fully in no single source, appears in remarkably similar form in a variety of fields. It will be familiar to normative political theorists as a generalized version of the Hobbesian theory of the state, to positive political theorists as a variant on the idea of a “political entrepreneur,” and to industrial organization economists as an elaboration on the Alchian/Demsetz theory of the firm. Our purpose here is to underscore the similarity of these various theories – all of which seek to explain institutional features in terms of the choices made by rational individuals facing collective dilemmas – and to examine the answers given to two key questions: How do institutions “solve” collective dilemmas? What are the costs entailed by institutional solutions?[1]
The structure of the chapter is as follows. The first section defines the notion of a collective dilemma more precisely by looking at two examples. The rest of the chapter then focuses specifically on the prisoner’s dilemma, by far the most studied of collective dilemmas, and on a particular “institutional” solution to it: central authority.[2] In Sections 2 and 3 we describe the basics of central authority and why it is resorted to in theory and practice. In Sections 4 and 5 we consider some of the costs of central authority and the possibility of doing without it.
A collective dilemma is a situation in which rational behavior on the part of individuals can lead to unanimously dispreferred outcomes.[3] More formally, collective dilemmas are situations that can be modeled by games possessing Pareto-inefficient Nash equilibria.[4] Two well-known game situations can illustrate what we mean.
Consider first some problems of standardization. Two railroads would both benefit if they used the same gauge of track, but each prefers the gauge that fits its own trains. Two grain merchants would both benefit if they used the same unit of weight to measure the grain, but each prefers the unit with which he has experience. Two politicians would both benefit if they joined forces to promote the same bill, but both prefer their own versions of the bill.[5] In each of these situations, the two agents – call them A and B – have two basic strategies: they can either stick with their current standard or switch to the other player’s standard. In the case of the politicians, it may also be reasonable to offer a compromise of some sort, but we will not consider that possibility here.
In some situations of standard setting, the benefits of coordinating are high enough so that both players would prefer to switch to the other’s standard rather than fail to coordinate on a standard at all. These situations pose the
following problem of interaction: if the other player is going to insist on his or her standard, then it is wise to switch to that standard; but, if he or she will soon give in, then one should insist on one’s own standard.
The standard-setting problem poses a collective dilemma because both players may rationally insist on their own standard, resulting in an outcome (no shared standard) that both consider inferior to some others that could have been attained (both adopting A’s standard or both adopting B’s standard). Plausible real-world examples of this inefficient outcome include the multiplicity of different railroad gauges in the nineteenth century, the widespread use of the QWERTY rather than the Dvorak keyboard on modern typewriters, and the difference between the U.S. and metric measuring systems.
If we turn to the more formal definition of a collective dilemma – a situation that can be modeled by a game possessing Pareto-inefficient Nash equilibria – then some care must be taken in the present example to choose the right game. A game is a formal representation of a strategic choice or a class of strategic choices. It consists of a set of players (in this case, A and B); a specification of the options or strategies of the players (A can either stick with his or her standard or switch to B’s, and similarly for B); a specification of what outcome results from each possible set of options chosen by the players (e.g., if A sticks and B switches, then coordination is achieved on A’s standard and B pays some costs in switching); and a specification of how players rank the various possible outcomes (A most prefers that they coordinate on A’s standard, next that they coordinate on B’s standard, next that neither switch, and last that both switch). A matrix representation of the standardization game – in which the value of agreeing on a standard is (arbitrarily) taken to be 5 and the cost of switching to be 2 – is given in Figure 4.1.[6]
Given a game, a Nash equilibrium for that game is a set of strategies, one for each player, such that no player can secure a more preferred outcome by unilaterally changing to some other strategy (all other players continuing to play their equilibrium strategies). In the preceding example, the pair of strategies (Switch, Stick) – where A’s strategy is listed first – is a Nash equilibrium because A prefers the equilibrium outcome (coordination on B’s standard, with some costs of switching for her) to the outcome she could get by changing her strategy to Stick (failure to coordinate on either standard, with no costs of switching). This equilibrium outcome is also Pareto efficient,
                         Player B
                     Stick        Switch

Player A   Stick      0, 0         5, 3
           Switch     3, 5        −2, −2

figure 4.1. A Standardization Game. Payoffs are listed as (A’s, B’s).
meaning that no other outcome exists that both players prefer to it. A little thought will reveal that the only other equilibrium is (Stick, Switch) and that this too is efficient.
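These definitions are mechanical enough to verify by brute force. The following sketch – purely illustrative, using the payoffs of Figure 4.1 – enumerates the four pure-strategy profiles and reports which are Nash equilibria and which are Pareto efficient:

```python
from itertools import product

# Payoffs from Figure 4.1, indexed by (A's strategy, B's strategy);
# each entry is (A's utility, B's utility).
payoff = {
    ("Stick", "Stick"):   (0, 0),
    ("Stick", "Switch"):  (5, 3),
    ("Switch", "Stick"):  (3, 5),
    ("Switch", "Switch"): (-2, -2),
}
strategies = ("Stick", "Switch")

def is_nash(a, b):
    """Neither player gains by a unilateral deviation."""
    ua, ub = payoff[(a, b)]
    a_stays = all(payoff[(a2, b)][0] <= ua for a2 in strategies)
    b_stays = all(payoff[(a, b2)][1] <= ub for b2 in strategies)
    return a_stays and b_stays

def is_pareto_efficient(a, b):
    """No other outcome makes both players strictly better off."""
    ua, ub = payoff[(a, b)]
    return not any(u1 > ua and u2 > ub for u1, u2 in payoff.values())

for a, b in product(strategies, repeat=2):
    tags = []
    if is_nash(a, b):
        tags.append("Nash equilibrium")
    if is_pareto_efficient(a, b):
        tags.append("Pareto efficient")
    print(f"({a}, {b}):", ", ".join(tags) or "neither")
```

Running it flags (Stick, Switch) and (Switch, Stick) as the only equilibria, both efficient; substituting the payoffs of Figure 4.2 would instead flag (Bribe, Bribe) as an equilibrium that is not efficient.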
It might appear that, because the game we have described has no inefficient equilibria, there is by our definition no collective dilemma. There is, however, another game that models the same situation a bit more accurately and that does possess inefficient equilibria. This game consists simply of repeated plays of the game sketched out earlier. Intuitively, such a game introduces the element of time into the situation facing A and B; if they fail to coordinate in the first time period, they have another chance. This opens up the possibility that each will try to outlast or outbluff the other. Farrell (1987) has shown that this game does have inefficient equilibria simply because each player tries to get the other to switch before giving in. Thus, in equilibrium, we expect a few rounds of bluffing in which neither party gives in and both forgo the benefits of coordination. This problem might be exacerbated if, as time went by, each player invested more and more heavily in his or her particular standard because there might come a time when standardization would no longer be worth it to either (the costs of switching having become too high).
In standardization or coordination games the problem is one of strategic uncertainty: will the players settle on one of the multiple Nash equilibria in the game (and if so, which one), or will they fail to coordinate entirely? Another collective dilemma is the famed prisoner’s dilemma. A two-person version of this game, illustrated in Figure 4.2, models the situation facing candidates for elective office before effective bribery laws were enacted (e.g., in England before 1883). Each candidate can either bribe some voters or not. Bribery, of course, is costly but – depending on what the other candidate does – it can secure an electoral advantage. Specifically, if both (or neither)
                         Player B
                     Bribe        Do Not

Player A   Bribe    −1, −1         1, −2
           Do Not   −2, 1          0, 0

figure 4.2. A Prisoner’s Dilemma. Payoffs are listed as (A’s, B’s).
bribe, then neither gains an advantage; but if one bribes and the other does not, then the briber is advantaged. If one assumes that the advantage gained by bribing exceeds the cost, then the situation facing the candidates is a prisoner’s dilemma. In Figure 4.2, we assume that the utility of the electoral advantage is 2 for both candidates, with a symmetric utility loss of 2 for the disadvantaged candidate; and that the cost of bribery in utils is 1 (this cost includes the monetary expenditure on bribery; the fines to be paid if caught, discounted by the probability of being caught; and any moral repugnance the candidate may feel). As can be seen, both candidates have a dominant strategy to bribe. That is, their best strategy is to bribe regardless of what they think their opponent will do. Thus, the strategy pair (Bribe, Bribe) is a Nash equilibrium. The collective dilemma comes in that this equilibrium is inefficient: both candidates could be made better off if neither bribed. Unfortunately, neither can trust the other not to bribe and so both incur the costs of bribery without realizing any benefits.[7]
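Because the entries of Figure 4.2 are built from just the two primitives named in the text – the value of the electoral advantage (2 utils) and the cost of bribing (1 util) – the dominance argument can be checked mechanically. A minimal sketch:

```python
ADVANTAGE = 2  # utility of the electoral edge gained over a non-briber
COST = 1       # utility cost of bribing (money, expected fines, repugnance)

def utility(my_bribe: bool, their_bribe: bool) -> float:
    """Payoff to one candidate, given both candidates' choices."""
    u = -COST if my_bribe else 0.0
    if my_bribe and not their_bribe:
        u += ADVANTAGE          # the briber gains the edge
    elif their_bribe and not my_bribe:
        u -= ADVANTAGE          # the non-briber suffers the edge
    return u

# Bribing is a dominant strategy: it beats abstaining whatever the rival does.
for rival_bribes in (True, False):
    assert utility(True, rival_bribes) > utility(False, rival_bribes)

# Yet mutual bribery is Pareto-dominated by mutual abstention.
assert utility(False, False) > utility(True, True)
```

The assertions hold precisely because ADVANTAGE exceeds COST; were the cost of bribery to exceed the advantage it secures, the dominance would break and the dilemma would dissolve.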
Another prisoner’s dilemma (actually, a close cousin) can be illustrated by the difficulties facing teams of laborers. In prerevolutionary China, large gangs of men would tug fair-sized boats up the Yangtze. The problem was that each man was tempted to slack off a bit. After all, if enough others were pulling, the boat would still progress; if too few others were pulling, it did not matter how hard one pulled anyway.[8] This situation is a collective dilemma because it is a Nash equilibrium for no one to pull at all (if no one else pulls, then the efforts of just one person are futile); yet this equilibrium is inefficient because everyone prefers the outcome in which everyone both pulls and gets paid.
In the remainder of this chapter, we discuss some solutions to the prisoner’s dilemma. We shall focus in particular on central authority. In many cases, of course, action can be taken to create prisoner’s dilemmas. Examples include the District Attorney’s separation of the suspects in the original prisoner’s dilemma, antitrust laws, and open shop laws. We are not concerned with institutions or rules that create dilemmas, however, only with those that solve them.
This section focuses on what we call “central authority.” The gist of this notion can be suggested by recalling the case of the previously discussed Chinese river boat pullers. Cheung (1983, 8) has noted that the problem of loafing was so severe that workers “actually agreed to the hiring of [someone] to whip them,” thereby ensuring that everyone both pulled and got paid. This simple idea underpins a wide array of institutional theories, including Frohlich and Oppenheimer’s (1978) theory of political entrepreneurship (see also Olson 1965; Salisbury 1969; Frohlich, Oppenheimer, and Young 1971), the Alchian and Demsetz (1972) theory of the firm, and Hobbes’s theory of the state (see Gauthier 1969; Kavka 1987).
The theory of political entrepreneurship suggests that n-prisoner’s dilemmas can be solved by “political entrepreneurs.” Political entrepreneurs have three essential features: (1) they bear the direct costs of monitoring the community faced with the collective dilemma, (2) they possess selective incentives (individually targetable punishments and rewards) with which to reward those whom they find cooperating or punish those whom they find “defecting,” and (3) they are paid, in various ways, for the valuable service they provide.
An example of political entrepreneurship is Dan Rostenkowski’s handling of what became the 1986 Tax Reform Act. The Democratic leadership was anxious to avoid the impression that the Democratic-controlled House of Representatives had killed tax reform. Rostenkowski, as chairman of the tax-writing Ways and Means Committee, was faced with the formidable task of ensuring that the members of his committee did not cave in to special and constituency interests clamoring for the preservation of tax loopholes (the collective dilemma arose because each committee member wished to preserve loopholes benefiting his constituency, but if enough loopholes were preserved, the Democrats could be accused of gutting reform). The selective incentives Rostenkowski had available to reward cooperative members included a variety of legislative favors at his disposal as chairman, the most obvious of which were the so-called transition rules – special dispensations publicly justified on the grounds that they allowed a smooth transition from the old to the new tax rules, politically justified on the grounds that they enabled key supporters to deliver benefits to important constituents. Rostenkowski made liberal use of the transition rules and ultimately was successful in reporting out a bill that made substantial reforms in the tax system (at the same time clearly benefiting traditional Democratic constituencies more than Republican constituencies). The chairman was paid for his troubles with a share of the transition-rule largesse, continuance in office, and (perhaps) an increased chance that he would someday become Speaker.
The moral of this story is that a long-standing feature of the institutional structure of the House – the office of chair of the Ways and Means Committee – facilitated the successful handling of a potential dilemma for the Democratic Party. The basic reason (upon which we expand greatly in the next chapter) is that the position is both powerful and essentially elective, so that its occupant has both the wherewithal and the incentive to ameliorate collective dilemmas.
The preceding view of political institutions has a direct analog in the theory of the firm advanced by Alchian and Demsetz (1972). Business entrepreneurs, in their view, have three distinguishing features. First, they are specialists in monitoring, whose function it is to prevent shirking by workers engaged in team production. Second, they alone have the right to hire and fire individual workers and to negotiate pay. Third, they have a residual claim to all profits produced by the enterprise. Each of these features is directly analogous to the features of political entrepreneurship identified earlier. First, Alchian and Demsetz define team production in such a way as to make the problem of shirking essentially an n-prisoners’ dilemma, very similar to the problem facing the Chinese river boat pullers. Thus, the function of monitoring serves the same purpose in the Alchian/Demsetz firm as in the Frohlich/Oppenheimer political organization. Second, the right to hire, fire, and negotiate pay gives the business entrepreneur some particularly potent selective incentives with which to reward cooperative and punish noncooperative behavior. Third, business entrepreneurs’ residual claims compensate them for their services. Alchian and Demsetz emphasize the inadequacy of paying monitors a flat salary, for they then have no economic incentive other than the fear of dismissal to perform their duties. If entrepreneurs have a claim to all profits in excess of the sum needed to pay workers’ wages, however, they are motivated to promote efficient collective action in order to maximize output, and thus profits.[9]
A final example of central authority as a solution to a prisoner’s dilemma is the Hobbesian state. The “war of all against all” can be viewed as the collectively irrational outcome of an n-prisoner’s dilemma (Gauthier 1969; Kavka 1987; Taylor 1987). Hobbes’s suggested solution is the institution of an absolute sovereign – an individual or assembly with unlimited authority to act for all members of the polity and with accompanying unlimited lawmaking and enforcement powers. The monarch monitors and punishes unlawful and aggressive behavior (Kavka 1987, Sections 4.4, 6.1). He is able to do this because of the vast power of his office. He is motivated to do this by the fees collected in his courts, the taxes collected by his officials, and by other devices that provide kings with a personal interest in promoting the peace and prosperity of their kingdoms (Hirschman 1977).
All the institutional theories surveyed here involve a central agent – whether political entrepreneur, businessperson, or monarch – with three common features: (1) the agent bears the direct costs of monitoring the population faced with the collective dilemma; (2) the agent possesses, by virtue of his or her institutional position, selective incentives with which to punish noncooperative and reward cooperative behavior; and (3) the agent is motivated to bear the costs of monitoring and to expend scarce resources on selective incentives in punishing and rewarding those he or she monitors, either by receiving a substantial share of the collective output, by receiving a claim to the residual of collective output above some preassigned level, or by some other compensation scheme designed to align the personal interests of the agent with the level of collective output. The essential purpose of establishing a central authority is to create an institutional position whose occupant has a personal incentive to ensure that the collective dilemma is overcome. In Olson’s terms, one can think of central authority as an institutional means of transforming latent groups into privileged ones (Olson 1965).[10]
It might be noted that central agents are not always confined in the literature to the role of supervisors, as they have been in the preceding discussion. In some versions of the theory of the firm, for example, corporate management is viewed as an arbitrator of intrafirm disputes. The gist of this view is that (1) many important transactions carried out within corporations are difficult to fully specify in advance; (2) this may lead to costly disputes when unforeseen contingencies arise; and (3) the CEO of the corporation thus has an interest in providing cheap, knowledgeable, and rapid “justice” when disputes arise. Part of the reason the corporation exists, then, is that the “legal system” provided within the firm by management is more flexible, cheap, and fair than the state-provided legal system to which the divisions of the corporation would have to appeal were all their transactions conducted in the open market.
This notion of central agent as provider of justice is of course a familiar one in the history of the state. The economic value of establishing a reliable system of property and justice, even in a local area, is evident throughout history. From a contractarian perspective, it is one of the clearest reasons to have a state.
The central agent as provider of justice is also visible within parties. Mayhew’s (1966) work on party loyalty among Congresspersons provides a number of examples of the leadership helping to hold together Democratic logrolls on the floor. We pursue this idea at greater length later in this chapter.
This section discusses why purely voluntary agreements cannot always be relied on to solve organizational dilemmas. We focus on economic organization, contrasting the fortunes of workers who organize into an Alchian/Demsetz firm with those of workers who remain unorganized (leaving all their transactions to “the market”).[11] The focus on economic rather than political organization is chosen for a variety of reasons: political scientists generally are less familiar with this literature and may profit from exposure to it; the theory of economic organization is more fully and formally developed than the corresponding political theory; and the principles of economic and political organization are fundamentally similar – this despite the fairly obvious initial differences (e.g., economic organizations produce private goods almost exclusively, while political organizations usually produce a mixture of both private and public goods). Political organization will, of course, come in for the bulk of our attention in the remaining sections and chapters.
But for now we shall concentrate on the organization of production. Consider a group of n workers producing for sale in the marketplace. Each worker i chooses an action, ai, from a set of available actions, Ai = [0, ∞). We shall interpret the action ai = 0 as “exert no effort” or “do nothing” and adopt the convention that action ai requires more effort than action bi if and only if ai > bi. Effort is assumed to be costly, so that the ith worker bears a cost – vi(ai) – for taking action ai, where vi is strictly increasing and such that vi(0) = 0. Given a vector of actions, a = (a1,…, an), one for each of the n workers, a total output, y(a), is determined. For simplicity, we shall assume that the price of the output y is $1, so that the total revenue produced is simply y(a). Can a group of n unorganized workers agree on a method of sharing this revenue such that all workers are properly motivated to work?
This question has been posed, in a precise fashion, by Holmstrom (1982), who makes three assumptions. (1) Unobservability: The particular action taken by each worker is unobservable, so that the share each receives can depend only on total revenue. He denotes i’s share as si(y). (2) Budget Balancing: Regardless of the level of total revenue, the shares of the n workers exhaust the total revenue (Σsi(y) = y for all y). (3) Concavity of Production: The function y is strictly increasing, concave, and differentiable with y(0,…, 0) = 0 (no effort, no output). Under these conditions Holmstrom shows that no n-tuple of actions exists that is both a Nash equilibrium and Pareto efficient. Put another way, he shows that any Nash equilibrium must be Pareto inefficient, so that any group facing the three conditions of unobservability, budget balancing, and concavity of production must be mired in what we have called a collective dilemma.
The reason for this can be seen in the context of a simple example. Suppose that si(y) = y/n for all i (i.e., each worker gets one nth of the total revenue). Each will choose an action ai in order to maximize the difference between his or her share of total revenue (y(a)/n) and the cost of his or her action (vi(ai)). Denoting the partial derivative of y with respect to ai by yi′, the first-order condition for this choice is yi′/n = vi′. That is, the worker continues increasing his or her level of effort until the marginal cost of this effort equals one nth of his or her marginal contribution to total output. But Pareto efficiency requires that each worker equate marginal cost to his or her full marginal contribution to total output.
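To see the size of the wedge, consider a minimal numeric sketch. The production function y(a) = 4·Σ√ai and cost function vi(ai) = ai below are assumed purely for illustration (they satisfy the concavity condition); the closed-form efforts follow from the two first-order conditions just stated.

```python
import math

N = 3  # number of workers (assumed for illustration)

def output(actions):
    """Assumed concave production function: y(a) = 4 * sum of sqrt(a_i)."""
    return 4 * sum(math.sqrt(a) for a in actions)

def cost(a):
    """Assumed effort cost: v_i(a_i) = a_i, so v_i' = 1."""
    return a

# Equal sharing: worker i maximizes (1/N)*4*sqrt(a_i) - a_i.
# FOC: (4/N) / (2*sqrt(a_i)) = 1  =>  a_i = (2/N)**2.
nash_effort = (2 / N) ** 2

# Pareto efficiency equates the FULL marginal product to marginal cost:
# 4 / (2*sqrt(a_i)) = 1  =>  a_i = 4.
efficient_effort = 4.0

for label, a in (("Nash", nash_effort), ("efficient", efficient_effort)):
    net = output([a] * N) / N - cost(a)  # per-worker share minus effort cost
    print(f"{label:9s} effort = {a:.3f}, per-worker net payoff = {net:.3f}")
```

Each worker nets 4 at the efficient profile but only about 2.2 in equilibrium; the efficient profile nonetheless unravels because each worker bears the full cost of effort while capturing only a third of its product.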
It is important to note some limitations on Holmstrom’s result. Miller (1987, 28) interprets it as showing that “with any budget-balancing incentive scheme…there will be a tension between individual self-interest and group efficiency – exactly the tension described by the prisoner’s dilemma.” But schemes that satisfy the budget-balancing and unobservability assumptions and that support efficient Nash equilibria do exist.
As an example, suppose that n = 3, si(y) = y/3 for all i, vi(ai) = ai for all i, and y(a) = 0 unless a1, a2, and a3 are all at least one, in which case y(a) = 99. Note that the specification of y violates the concavity assumption. It says that all three workers must exert a particular minimum level of effort; otherwise, no salable output will be produced. All the other major conditions of Holmstrom’s model are satisfied. In this example, however, there exists a Nash equilibrium that is Pareto efficient. Pareto efficiency requires that all three workers choose action ai = 1, yielding a payoff to each of 99/3 − 1 = 32. But no worker has an incentive unilaterally to depart from this triple of actions. On the one hand, if i lowers his or her level of effort below 1, his or her share of output drops by 33 (to zero) while he or she saves at most $1 in costs. On the other hand, if i raises his or her level of effort, no more output is produced but additional costs are incurred.
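A brute-force check of this claim – scanning a grid of unilateral deviations, which by symmetry suffices for all three workers – confirms that the efficient profile is an equilibrium:

```python
def output(a1, a2, a3):
    """All-or-nothing production: salable output requires every effort >= 1."""
    return 99.0 if min(a1, a2, a3) >= 1 else 0.0

def payoff(i, actions):
    """Worker i's equal share of output minus linear effort cost."""
    return output(*actions) / 3 - actions[i]

equilibrium = (1.0, 1.0, 1.0)
base = payoff(0, equilibrium)  # 99/3 - 1 = 32

# Scan a grid of unilateral deviations by worker 0 (workers 1 and 2 are
# symmetric): none of them improves on the equilibrium payoff.
for k in range(401):
    a = k / 100  # candidate efforts from 0.00 to 4.00
    assert payoff(0, (a, 1.0, 1.0)) <= base
print("(1, 1, 1) survives every unilateral deviation; payoff =", base)
```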
The message of this example is that unorganized workers with highly complementary skills may achieve efficient equilibria via simple share-of-output agreements. What is required is that the value of total output be quite low until all workers perform their tasks at an acceptable level. In other and more evocative terms, each worker’s contribution must be like a link in a chain, not like a drop in the bucket.[12] An economic example approximating such a production function might be coauthorship when neither coauthor has the other’s expertise. A political example sometimes occurs in voting: against a solid minority of 49, a majority of 50 can produce “victory” (and, with it, spoils) only if all members of the majority do their part and vote.
Despite this caveat regarding extreme complementarities in production, however, Holmstrom’s result does show that, for a wide class of situations, when actions are unobservable and budgets are balanced, inefficient equilibria are unavoidable. This suggests that a group of workers in an industry where workers cannot monitor one another is inevitably faced with a collective dilemma.
Holmstrom suggests that the way around this problem is to relax the balanced-budget assumption. In other words, let the workers share the output in some budget-balancing way if output attains the efficient level; otherwise, give all the output away to some third party or destroy it. This scheme does, in principle, allow for efficient equilibria. For, supposing that everyone is currently working at efficient levels, each is faced with a choice between shirking (which saves some effort but costs the entire output) and not shirking (which requires effort but is remunerated with some share of the total output). As long as each worker’s share of the total output exceeds the total cost of his or her effort, a condition that is satisfied ex hypothesi, each worker will work.[13] Holmstrom’s technique works for basically the same reason that efficient equilibria can be attained with extreme production complementarities. Indeed, Holmstrom can be interpreted as using sharing rules, together with detailed knowledge of the production function y, to create the same interdependencies among workers that were posited as a feature of production technology alone in the preceding example. However, just as such production technologies are the exception rather than the rule, so too does it appear that Holmstromian employment contracts are exceptional.
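A sketch of such a contract, reusing the illustrative production function assumed above (not Holmstrom’s general construction): the workers split the output equally whenever it attains the efficient level y*, and receive nothing otherwise.

```python
import math

N = 3
EFFICIENT_EFFORT = 4.0                        # solves 4/(2*sqrt(a)) = 1
TARGET = 4 * N * math.sqrt(EFFICIENT_EFFORT)  # efficient output y* = 24

def output(actions):
    """Same assumed production function as before: y(a) = 4 * sum sqrt(a_i)."""
    return 4 * sum(math.sqrt(a) for a in actions)

def pay(actions):
    """Equal shares if output reaches the efficient level; otherwise the
    output is destroyed or given away and the workers receive nothing."""
    y = output(actions)
    return y / N if y >= TARGET else 0.0

def net(i, actions):
    return pay(actions) - actions[i]

base = net(0, [EFFICIENT_EFFORT] * N)  # 24/3 - 4 = 4 per worker

# Any shirking now forfeits the whole share, and extra effort only adds cost,
# so no unilateral deviation by worker 0 pays (checked on a grid).
for k in range(801):
    a = k / 100
    assert net(0, [a, EFFICIENT_EFFORT, EFFICIENT_EFFORT]) <= base + 1e-9
print("Efficient effort is now an equilibrium; per-worker net payoff =", base)
```

The penalty for falling short of y* manufactures exactly the link-in-a-chain interdependence that the three-worker example obtained from the production technology itself.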
The rule in employment contracts is based on a violation of the unobservability assumption. Business entrepreneurs expend resources in order to monitor the actions of employees and base their pay chiefly on their observed actions rather than on total output. This corresponds to the Alchian/Demsetz model of the firm, or what we have referred to generally as central authority. Central authority can ameliorate the collective dilemma facing workers (in the sense of effecting a Pareto improvement) because workers can be effectively motivated to work by the system of monitoring and sanctions that the central agent implements. Monitoring of employee actions need not be perfect in order to achieve a Pareto improvement. Holmstrom has shown that partial information about the actions of workers is always valuable in the sense that, were such information available at no cost and workers’ pay based in part on it, then all workers could in principle be made better off. The reason for this is simply that the workers’ incentives actually to work are greatly improved when their pay is based to some extent directly on their level of work. Output thus can go up rather dramatically when pay is based on direct information about effort (how much it increases depending on the quality of the information), and in principle everyone can share in the profits from increased production. In practice, of course, information is costly. Because of its value in stimulating effort, however, it may be worth a substantial price. It may even be worth the cost of hiring an (n + 1)st worker whose only job is to monitor and sanction the original workers, as in the example of the Chinese river boat pullers.
The possibility that monitoring can so improve incentives to work that Pareto improvements result, even after the cost of monitoring is taken into account, is one of the central insights of the Alchian/Demsetz model of the firm. Nonetheless, it should be noted that full or “first-best” efficiency is never attained in Alchian/Demsetz firms. The first-best solution is for every worker to perform the efficient action, ai∗, and for no resources to be expended on monitoring. The Alchian/Demsetz firm mitigates the incentive problem but does so at the cost of expending real resources in monitoring – an otherwise useless endeavor. A lower bound on the amount by which such firms fall short of first-best efficiency is simply the amount of resources devoted to monitoring (it is a lower bound because workers may shirk even with monitoring – albeit less than without it).
If infinitely high penalties can be imposed on workers caught shirking, then the amount of actual monitoring can be reduced to near zero while still providing workers with sufficient incentives to work at the efficient level. Such a scheme could approximate first-best efficiency as closely as desired (Mirrlees 1976). But, although a “Pascal’s wager” solution might work if managers could rent fire and brimstone, bankruptcy laws and other legal devices seem to prevent the infliction of some punishments utilized in hell.
To summarize the discussion so far, it is difficult to achieve first-best efficiency in group production of private goods. If there are extreme complementarities in production, or if these complementarities are mimicked by the employment contract as suggested by Holmstrom, then full efficiency can be attained in equilibrium by a simple share-of-output contract with no need for the organization brought by a central agent. But the typical real-world case involves neither extreme complementarities nor extreme contracts, and in this case workers are insufficiently motivated to work if they merely receive a share of total output. This insufficiency of motivation prompts the development of firms in which certain agents monitor and sanction the actions of others. This monitoring-cum-sanctions gives workers an incentive to work, and output can increase enough to cover the costs of monitoring. Nonetheless, monitoring is costly and would be avoided in a first-best world.[14]
The model of collective production considered in the last section was for the short term, focusing on the more or less immediate rewards and punishments available to motivate behavior. But the possibility of voluntary or anarchistic cooperation in long-term interactions has been prominently argued in the literature. Taylor (1976; 1987) and Axelrod (1981; 1984) have shown that a simple tit-for-tat strategy in two-person iterated prisoner’s dilemmas can support cooperation in equilibrium without any apparent institutional structure.[15] The gist of this result is that current noncooperation can be deterred by the threat of future retaliation in kind. If one takes this “shadow of the future” argument seriously enough, the question arises as to why central authority is ever necessary.
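The force of this “shadow of the future” can be quantified with the payoffs of Figure 4.2. The sketch below is a standard discounting calculation, assumed for illustration rather than drawn from Taylor or Axelrod: a candidate weighs bribing today against facing in-kind retaliation in every later round, and the threat deters bribery only if the discount factor δ is at least 1/2 (i.e., only if future payoffs are weighted heavily enough).

```python
# Stage-game payoffs for the row player of Figure 4.2, with T > R > P > S:
# temptation, mutual cooperation, mutual punishment, sucker.
T, R, P, S = 1, 0, -1, -2

def cooperate_forever(delta):
    """Discounted value of mutual abstention in every round."""
    return R / (1 - delta)

def defect_once(delta):
    """Bribe today against a rival who retaliates in kind forever,
    then face mutual bribery in every later round."""
    return T + delta * P / (1 - delta)

# Retaliation deters defection once delta >= (T - R) / (T - P) = 1/2.
for delta in (0.3, 0.5, 0.7):
    sustainable = cooperate_forever(delta) >= defect_once(delta)
    print(f"delta = {delta}: cooperation sustainable? {sustainable}")
```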
Part of the answer has to do with two of the assumptions that make the “shadow of the future” formidable – that both parties can observe whether or not the other has cooperated and that both expect the interaction to last a long time. Both of these assumptions rely for their approximate fulfillment in the real world on the existence of appropriate institutions.
Consider first the problem of unobservability. If the players in an iterated prisoner’s dilemma can neither observe whether others have cooperated in each stage of the game nor infer this from what they can observe, then policing noncooperation by in-kind retaliation is obviously problematic. This is essentially the difficulty facing many arms control agreements. An agreement not to develop certain kinds of weapons may be concluded, but typically neither side can easily observe compliance. Moreover, neither side can observe a payoff in increased security from which compliance might be inferred. Hence, elaborate verification procedures are resorted to in an attempt to provide sufficient observability so that both sides feel they have a credible deterrent threat. The role of UN monitors in verifying the winding-down of the Iran–Iraq war provides a more explicit institutional example of the same point.[16]
A second prerequisite for decentralized, purely voluntary cooperation is that both players believe their interaction will last long enough so that the possibility of future gains can deter present noncooperation. On the one hand, this belief can be endangered in a number of ways, giving rise to a collapse of cooperation. On the other hand, it can be shored up institutionally. For example, Kreps (1985) illustrates how business firms – artificial persons with indefinitely long lives – can replace natural persons for the purposes of many transactions. If an individual has a reputation for honestly dealing with customers who might be cheated, there is a possibility that those customers will become nervous when they believe he or she is near retirement or death. If a firm has a reputation for honestly dealing with customers whom it might cheat, there is less reason to become nervous when the current owner nears retirement or death. This is because a firm’s reputation for honest dealing is a valuable asset, which contributes to the sale price of the firm. Thus, owners nearing retirement recognize that any cheating of customers in the twilight of their careers may cost more (in the form of a lowered sale price) than it is worth.
In addition to the problems of unobservability and shortness of interaction, which hinder voluntary cooperation even between two persons, several difficulties appear or are exacerbated when the number of players grows beyond two. The most straightforward of these difficulties is illustrated by the steady erosion of incentives to contribute to public goods that often occurs as a group’s size increases. Theoretically, this follows in models in which the importance of individual contributions declines with group size (cf. Hardin 1982).
An institutional response to the problem of maintaining voluntary cooperation in large groups is illustrated in the Hutterite communes. The Hutterites have developed an elaborate procedure for regularly splitting their communities whenever a certain optimal size (sixty to one hundred individuals, or about six to ten families) is exceeded (Bullock and Baden 1977). A similar emphasis on smallness (plus a bit of isolation) characterizes other successful communal lifestyles (e.g., the Israeli kibbutz).
A second difficulty, which appears in two-person prisoners’ dilemmas but is more troublesome in n-prisoners’ dilemmas, is the problem of multiple equilibria. This can be explained by adverting to one of the more remarkable results in the theory of repeated games: the folk theorem. The folk theorem, so called because it is widely known to game theorists but is of obscure authorship, deals with repeated noncooperative games (games in which the players cannot make binding agreements). Let G be a noncooperative game in normal form (two examples of such games are given in Figures 4.1 and 4.2). Denote by G∗ the “supergame” of G – that is, the game that consists of an infinite sequence of plays of the “stage game” G. Roughly put, the folk theorem states that, if the players of G∗ have enough information (in particular, at the end of each stage, they are informed of the strategy chosen by all other players in that stage), then any outcome that is individually rational can be supported by some Nash equilibrium.[17] An outcome is individually rational if the payoff each player gets is not less than his or her security level, defined as the worst payoff that can be forced upon him or her by the remaining players. Thus, very little restriction is placed on the outcomes that one might predict in a repeated noncooperative game by the notion of Nash equilibrium alone.
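For the game of Figure 4.2, the security levels are simple to compute, which makes the breadth of the folk theorem concrete. A minimal sketch (pure-strategy minmax, which suffices for this game):

```python
# Row player's payoffs from Figure 4.2 (the game is symmetric).
payoff = {
    ("Bribe", "Bribe"): -1, ("Bribe", "Do Not"): 1,
    ("Do Not", "Bribe"): -2, ("Do Not", "Do Not"): 0,
}
strategies = ("Bribe", "Do Not")

# Security level: the best a player can guarantee against a hostile rival,
# i.e. the max over own strategies of the min over the rival's strategies.
security = max(
    min(payoff[(mine, theirs)] for theirs in strategies)
    for mine in strategies
)
print("security level:", security)  # -1, guaranteed by bribing
```

Each candidate can guarantee himself or herself no more than −1, so the folk theorem supports any feasible payoff pair at or above (−1, −1) – perpetual mutual abstention at (0, 0), but also a continuum of far less attractive outcomes.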
An industry devoted to finding such refinements of Nash equilibria as perfect or sequential equilibria has arisen in game theory in response to the problem of multiple equilibria (see, for example, Selten 1975; Kreps and Wilson 1982; Kalai and Samet 1984; Banks and Sobel 1987). It is safe to say, however, that none of the refinements of Nash equilibria proposed so far produce unique equilibrium predictions for all games. The multiplicity of equilibria, however, means that a coordination problem similar to the standardization game discussed in Section 1 can arise over which of the many equilibrium outcomes will be selected. One view of leadership (or central agency, in our terms) is as a mechanism for preventing any efficiency losses through lack of coordination (see Calvert 1985; Kavka 1987, 247).
A third impediment to decentralized cooperation in large groups is what we refer to as the group punishment feature. In a repeated prisoner’s dilemma, if one player defects at time t, the only way to punish him (if side payments are not allowed) is for some other player to defect at some future date. But any such future defection unavoidably hurts not just the original defector but all other players as well. It is therefore questionable whether collective action can hold together on a purely voluntary basis, just on the threat of in-kind retaliation. Should Ms. A really resume polluting in order to punish Mr. B’s act of pollution, given that she thereby also punishes C, D, and E? Can a group of laborers gain a reputation for reliability if shirking by any one of them is punished by retaliatory shirking? When the strategy of in-kind retaliation is carried to its logical extreme – in the so-called grim trigger strategy – cooperation is enforced by the threat of universal and perpetual defection. But if this threat ever gets carried out (in a model in which mistakes are possible, presumably), the result is tantamount to the utter dissolution of the organization. We do not believe that many important organizations are held together by a well-understood threat of dissolution should any member defect. It is possible to bypass the group punishment feature if side payments – transfers of private goods – are feasible. In this case, a defector can be punished directly: whipped, fined, ostracized, frowned upon, whatever. But the other problems – unobservability of actions, shortness of time horizons, insignificance of individual contributions, and multiplicity of equilibria – remain and may be severe.
Although institutionalizing central authority can in principle be effective in overcoming collective dilemmas, there is no guarantee that it will do so in practice.[18] Central authority can be either too weak or too strong. It is too weak when the selective incentives at its command are insufficient to deter noncooperative behavior, so that the potentially capturable benefits of cooperation are not in fact captured. This is the case of the king who cannot maintain internal order, or of the businessman who cannot prevent shirking by his employees. Central authority is too strong when the selective incentives at its command allow the incumbent central agent to appropriate all the rents produced by collective effort, to deter any attempt to remove him or her, and even to extract resources produced by individual (noncollective) action. This is the case of the strong queen who, while maintaining order, extracts taxes so high that each citizen is nearly indifferent between civilized society and the war of all against all.
These twin problems besetting the institution of central authority recall Madison’s comment in Federalist No. 51: “In framing a government to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself.” When Madison wrote this the American people had recently experienced both government too strong (under George III) and government too weak (under the Articles of Confederation). His dictum simply recalled this experience to his readers’ minds. From a contractarian perspective, Madison’s statement haunts any institution that relies on central authority to solve collective dilemmas: what is the point of central authority if it fails, through weakness or through strength, to effect a Pareto improvement?
The rest of this section is devoted to a discussion of one horn of this dilemma: the problem of authority that is too strong. This problem has, of course, received considerable attention from political theorists through the years. In particular, the motivation of the central agent has received extensive attention. Alchian and Demsetz (1972) argue that the central agent will have the incentive to monitor at the efficient level only if he or she has a claim to all revenues above a certain fixed level (i.e., only if he or she is the residual claimant). Frohlich and Oppenheimer (1978) are less precise but emphasize the importance of the central agent having a substantial stake in the collective action being organized. Some versions of the theory of absolute monarchy emphasize the king’s ultimate ownership of all land (cf. Hirschman 1977).
It is not clear that any of these techniques of motivating the central agent – giving him or her the residual claim or a substantial stake or reversionary ownership rights – adequately deals with the problem at hand. A residual claimant may profitably devote time to driving his or her employees’ wages down rather than helping to increase their productivity.[19] A political entrepreneur may sell out his or her followers. A king may prosecute ruinous foreign wars or pursue a luxurious lifestyle rather than sticking to his Hobbesian functions. These defects of motivation have given rise to a variety of institutional supplements. We shall discuss them here under three headings: establishing mechanisms for the central agent’s removal, lengthening his or her time horizon, and putting central authority into commission.
The first of these techniques is the most straightforward. If the central agent can be removed by the group whose agent he or she is, then actions detrimental to all or most of the group should presumably be discouraged. From this perspective, it is an important part of the total compensation package that CEOs can be removed by the stockholders (but not, usually, by the workers); that legislative party leaders are elected (usually, by members of their party serving in parliament); that pirate captains (Ritchie 1986) and the kings of the ancient Germanic tribes were elected by their followers; and that the right of the people to overthrow “unjust” monarchs, even those reigning by divine right, was clearly understood. The practical importance of the possibility of removal of course depends on how real the possibility is. At one extreme, if removal requires revolution and revolutionaries can be hanged, then a substantial prisoner’s dilemma arises over who is to bear the cost of providing the collective good of removing the tyrant. At the other extreme, competitive and regular elections with low costs to losing challengers may impose a substantial constraint on the incumbent central agent.
Another technique of shoring up the incentives of the central agent, which complements the possibility of removal, is to lengthen his or her time horizon. This makes the threat of removal more potent because there is more to be lost in the future. Time horizons of kings can be extended by making monarchy hereditary.[20] Time horizons of corporate managers can be extended by the development of marketable reputations and “good will” (Kreps 1985) or by the posting of bonds (Jensen and Meckling 1976). Time horizons of politicians can be extended by attractive (but forfeitable) pension schemes, such as peerages and knighthoods in England or retirement benefits in the U.S. Congress, and by putting no limit on the number of terms that may be served.
A third and quite common technique of getting around the problem of too-strong central authority is to make it collective. The institutionally simplest way to do this is to put central authority into commission. Examples include plural executives, such as the Roman Triumvirates or the Swiss Federal Council, and corporations, whether civil, eleemosynary, business, or municipal. Institutionally more elaborate schemes fall under the rubric of “checks and balances”: the independent judiciary, the separation of executive and legislative powers, bicameralism, the independent comptroller in business firms, and so forth (Lijphart 1984; Baylis 1989; Watts and Zimmermann 1983).
The simpler examples of collective authority, where central power is shared but not institutionally divided and balanced, raise an obvious tradeoff. On the one hand, the more central agents there are, the less likely that all will collude in schemes of corruption or oppression. Ideally, each will watch the others. On the other hand, the more members there are, the smaller is the stake and say of each in collective affairs, hence the less likely that the collective action problem among the central agents will be overcome. The trick is to replace a single central agent, who ideally can convert a latent into a privileged group but who cannot quite be trusted, with a group of central agents that is (1) small enough, and in frequent enough interaction, so that voluntary cooperation in sharing the costs of monitoring can emerge; (2) composed in such a way that it can be trusted; and (3) large enough or given a large enough stake in the success of collective action so that it is viable in Schelling’s sense (i.e., each member of the group will benefit if the group cooperates in policing collective action, even if they bear all the costs themselves; cf. Schelling 1978).[21]
This chapter has surveyed theories of organizational design from several fields: the theory of political entrepreneurship from the political economy literature, the theory of the firm in the industrial organization literature, and the Hobbesian theory of the state. From this survey we have pieced together a common view of the origin and functioning of organizations.
In rough outline, this view goes as follows: Collective action in any field of endeavor can produce a surplus, in the sense that collective output exceeds the sum of individual outputs. This surplus appears in firms, for example, whenever the production process is such that what worker A does increases the marginal productivity of worker B, and it appears in armies whenever what soldier A does increases the marginal effectiveness of soldier B. Such a surplus from collective action is an incentive to collective action. Unfortunately, even if the product is private (widgets or plunder) instead of public (national defense), there is a substantial free-rider problem standing in the way of voluntary cooperation. In the absence of unusual conditions, any single-period contract based solely on sharing the collective output leaves substantial incentives to shirk and free ride (Holmstrom 1982); and any multiperiod contract based solely on in-kind retaliation for shirking is implausible in large organizations. Thus, simple sharing rules and in-kind retaliation rules cannot sustain large organizations. Some attention to the actual actions taken by the various workers, soldiers, political activists, and the like is needed.
This necessity for keeping track of the actual effort and actions taken leads to the creation of specialists in monitoring – and gives rise to the profusion of auditors, managers, and supervisors observable in all real-world organizations of any size. But quis custodiet ipsos custodes? The answer has always been to arrange the incentives of auditors so that they will in fact ameliorate collective action problems. The two basic forms this tinkering with incentives has taken are checks and balances (getting the auditors somehow to watch one another as well as those they audit) and hierarchy (placing auditors above the auditors). The latter solution of course leaves the top auditor unwatched, and here the solution has been twofold: to give the top auditor – whether general, chief executive officer, or prime minister – a substantial personal stake in the success of the collective enterprise and to provide a mechanism – coup, proxy fight, election, or whatever – for his or her removal.
5
A Theory of Legislative Parties
Definitions of political parties have been offered from two main perspectives: one emphasizing structure; the other, purpose. The structural perspective defines parties according to various observable features of their organization. Studies of the historical development of parties, for example, take pains to distinguish “premodern” parties from “modern” ones, typically by pointing to the increasing elaboration of extraparliamentary structures in the latter (Duverger 1954; LaPalombara and Weiner 1966). The purposive approach, in contrast, defines and categorizes political parties by the goals that they pursue. Typical examples include Edmund Burke’s definition of a party as a group of men who seek to further “some particular principle in which they are all agreed” (Burke 1975, 113); Schattschneider’s definition, whereby a political party is “an organized attempt to get…control of the government” (Schattschneider 1942, 35); and that of Downs, whereby “a political party is a team of men seeking to control the governing apparatus by gaining office in a duly constituted election” (Downs 1957, 25).[22]
Neither the structural nor the purposive definitions of parties are suited to the needs of this chapter. The structural definitions take as defining features the kinds of things that we hope to explain. Moreover, these definitions generally turn on extraparliamentary organization rather than on the intraparliamentary organization that is our main concern. The purposive definitions of party, on the other hand, assume too much about the internal unity of parties. Indeed, the more formal definitions make parties into unitary actors who single-mindedly seek to maximize votes, probability of victory, policy-derived utility, or some such maximand.[23]
The unitary actor assumption has proven valuable for many purposes – spatial models of elections and models of coalition formation come readily to mind – but it is not a useful starting point from which to build a theory of the internal organization of parties. Such a theory must begin with individual politicians and their typically diverse preferences, explaining why it is in each one’s interests to support a particular pattern of organization and activity for the party. Accordingly, we begin not with parties and postulated collective goals but rather with legislators and postulated individual goals. The task of this chapter is to explain how a party with substantial, if not perfect, coherence of collective purpose might emerge from the voluntary interaction of individual politicians. Put another way, we seek to answer the following question: how can a group of formally equal and self-interested legislators, with demonstrably diverse preferences on many issues, agree on the creation or maintenance of a party, on the organizational design of a party, and on the setting of collective goals? In answering this question, we borrow from the general perspective on organizational design developed in Chapter 4.
The (admittedly partial) answer that we give to this question can be described as either neo-institutional or neocontractarian, in the sense that these terms were used in the previous chapter. Those familiar with the economics literature will find it similar in intellectual content to the theory of the firm. We begin in Section 1 by discussing the goals of individual legislators, accepting the usual emphasis on reelection but highlighting factors that improve the reelection probabilities of all members of a given party. Section 2 notes that not enough attention will be paid to these common factors – which are public goods to members of the same party – without organized effort of some kind. In Section 3 we argue that an important reason for the existence of legislative parties is to attend to the collective component in the reelection chances of their members. The arguments we employ are abstract enough that they might apply to a number of national and historical contexts. Our primary concern here, however, is suitability to the specific context of interest – the post–World War II American Congress.
The possible goals of rational legislators are many, including reelection, internal advancement, “good” policy, social prestige, and advancement in the hierarchy of political offices. Many studies, however, concentrate on the reelection goal, noting that reelection is typically necessary to satisfy other plausible goals. Although we do not assume that legislators are “single-minded” in their pursuit of reelection (Mayhew 1974), we do believe that it is an important component of their motivation and that, to begin with, it is reasonable to consider this goal in isolation.
The primary task of this section is to defend the notion that the probability of reelection of the typical member of Congress depends not just on such individual characteristics as race, sex, and voting record but also on the collective characteristics of the member’s party. For some, this point might be entirely unobjectionable. After all, how many empirical studies of American voting behavior dismiss the partisan attachments of the electorate as unimportant? Even some who have prominently argued that the electorate is “dealigning” judge contemporary levels of partisanship to be far from the point of “zero partisanship” (see Burnham’s introduction to Wattenberg 1984, xi). And partisan attachments in the electorate imply a collective component in the reelection fates of candidates of the same party – as indicated in such venerable political science concepts as partisan “electoral tides” and presidential “coattails.”
Nonetheless, many have noted that in the twentieth century the president’s coattails grew shorter and shorter as the congressional and presidential party systems became more and more separate (Calvert and Ferejohn 1983; Schlesinger 1985). It may also be that the steady stream of articles proclaiming party decline has planted seeds of doubt about the meaning of partisan electoral tides for today’s well-entrenched House incumbents.
It is to those who entertain such doubts that we address this section. We start with a simple model in which the reelection probability of a typical House member may depend both on that member’s characteristics and on the characteristics of the member’s party. Notationally, we shall write R_i = R_i(c_i; p_i), where R_i represents the ith legislator’s probability of reelection, c_i represents the ith legislator’s individual characteristics, and p_i represents the ith legislator’s party’s characteristics.[24] This notation reflects the “holy trinity” of voting research – party, personal characteristics, and issues – but collapses the latter two factors into c_i.
In order to say anything substantive about reelection, of course, we need more than this formal notation, which allows the possibility that R_i is a constant function of c_i, p_i, or both. We take it as uncontroversial that R_i depends substantially on c_i. Any reader who finds it uncontroversial that R_i depends substantially on p_i as well may skip the rest of this section. Given the literature of the 1970s and 1980s on the decline of party, however, we feel it necessary to defend this assumption explicitly.
The degree to which p_i affects the probability of reelection depends, of course, on what exactly p_i stands for. Our interpretation is that p_i represents the public record of the ith legislator’s party. Very briefly defined, this record consists of actions, beliefs, and outcomes commonly attributed to the party as a whole. For example, issue positions adhered to by substantial majorities of the party – especially if opposed by majorities of the other party – become part of its public record. Somewhat more carefully defined, a party’s record is the central tendency in citizens’ beliefs about the actions, beliefs, and outcomes attributable to the national party.
Taking the second term (p_i) first, note that party records refer to beliefs about parties, not evaluations of them. This differs from notions of party identification – certainly from older versions that hinge on early socialization (Campbell et al. 1960), but also from revisionist versions that hinge on how voters evaluate the outcomes that they attribute to a party (Fiorina 1977). We follow Fiorina’s account of party identification in our use of the term “party record” to refer to the things that might go into a voter’s evaluative process; however, we construe these things more broadly to include actions – and even beliefs – in addition to outcomes. A party’s record, thus, is a commonly accepted summary of the past actions, beliefs, and outcomes with which it is associated. Of course, it is quite possible under this definition that some aspect of a party’s record (some particular action, belief, or outcome) will help some of that party’s incumbents, have no effect on some, and hurt still others. This does not mean that the party’s record varies from district to district, just that evaluations of it vary.
A party’s record is best understood as the central tendency in mass beliefs, rather than as a single primordial belief with which everyone is somehow endowed. Different individuals may identify the party with different actions, beliefs, and outcomes. Some may have no view at all. Others may have “erroneous” views (e.g., identifying the Republicans with more liberal policies). Nonetheless, there is generally a systematic and more or less “correct” component in mass opinions about the parties. Moreover, because district perceptions of what actions, beliefs, and outcomes should be associated with the parties are averages of individual perceptions, the systematic component in district perceptions is larger, and the idiosyncratic component, smaller. Thus, incumbents – who, electorally speaking, face district rather than individual perceptions (or other group perceptions, such as that of the reelection constituency) of their party’s record – tend to be faced with a similar perception of their party’s record, regardless of where they run. The central tendency of district perceptions is symbolized formally by p_i.
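The force of this averaging argument can be stated precisely. In the following sketch the error-term notation is ours, introduced only for illustration: each voter j in district i perceives the national record p with idiosyncratic error, and district perceptions average those errors away.

```latex
% Voter j in district i perceives the national record p with error:
\[
  p_{ij} = p + \varepsilon_{ij}, \qquad
  \mathbb{E}[\varepsilon_{ij}] = 0, \qquad
  \operatorname{Var}(\varepsilon_{ij}) = \sigma^2 .
\]
% The district-level perception averages over the district's n_i voters:
\[
  \bar{p}_i = \frac{1}{n_i}\sum_{j=1}^{n_i} p_{ij} = p + \bar{\varepsilon}_i,
  \qquad
  \operatorname{Var}(\bar{p}_i) = \frac{\sigma^2}{n_i} .
\]
% As n_i grows, the idiosyncratic component shrinks toward zero, so district
% perceptions converge on the common record p and incumbents everywhere face
% much the same p_i.
```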
Of course, the difference between the Democratic Party’s record in Alabama and in Massachusetts is rather large, which is why our definition refers to national parties. There is no doubt that the Democratic Party’s record in Alabama was influenced by George Wallace and other state party figures. The actions, beliefs, and outcomes attributed to the national party, however, can vary independently of state and local factors. The national factors that have the best-documented impact on electoral results are the state of the economy and the performance of the president. But major pieces of legislation passed on a party basis presumably have some impact as well.
National events can have an impact both because of the evaluative response of voters – no doubt mediated by press reactions – and because potential candidates and contributors anticipate voters’ responses (Jacobson and Kernell 1983). As Jacobson (1990, 4) puts it:
When national conditions favor a party, more of its ambitious careerists decide that this is the year to go after a House seat. Promising candidates of the other party are more inclined to wait for a more propitious time. People who control campaign resources provide more to challengers when conditions are expected to help their preferred party, more to incumbents when conditions put it on the defensive…. The collective result of individual strategic decisions is that the party expected to have a good year fields a superior crop of well-financed challengers, while the other party fields more than the usual number of underfinanced amateurs. The ultimate result is that general anticipations of a bad year help to bring about a generally bad year.
The logical extreme of Jacobson’s argument could take the form of a self-fulfilling prophecy, with candidates’ and contributors’ responses to electoral chimeras working to transform rumor into reality. But as Jacobson (1990, 4) notes, “decisions based on illusion are hardly strategic; national conditions must have some independent effect on the outcome for the argument to make sense.”
As we noted earlier, a party’s record may affect the reelection probabilities of its members in different ways – witness the civil rights issue in the 1960s. Nonetheless, substantial components of a party’s record affect all its members similarly: for example, all are hurt by scandal or helped by perceptions of competence, honesty, and integrity; all or nearly all are helped by the party’s platform, when taken as a package. Thus, party records often can be changed in ways that affect the vast majority of party members’ reelection probabilities in the same way (either helping all or hurting all).[25]
If this claim is true, the election statistics for the House should reveal that the electoral fates of members of the same party are tied together, as suggested in the old metaphor of electoral tides. We shall now discuss three slightly different methods of testing whether this is the case in the postwar period.
The first method of testing for the existence of electoral tides is that employed in the literature on the nationalization of electoral forces (Stokes 1965, 1967; Claggett, Flanigan, and Zingale 1984; Kawato 1987). The national partisan forces found in this literature are essentially what we are looking for: their statistical definition entails that they affect all candidates of the same party similarly. Much of the literature does not bother to report tests of whether the national forces discovered are statistically significant, however. Thus, we shall briefly conduct our own analysis of variance here, focusing on interelection vote swings to the incumbent party.
The vote swing to the incumbent party can be computed for every pair of consecutive elections held in a given district, simply by taking the percentage of the two-party vote received by that party’s candidate at the later election and subtracting from it the percentage of the two-party vote received by that party’s candidate in the earlier election. If there are national factors that affect all candidates of a given party in similar fashion, then an analysis of the variance in these interelection swings should reveal a partisan effect: all Democratic candidates should tend to move together, and similarly for the Republicans.
We have examined this possibility. We shall not present the details of the analysis here, but the bottom line is that if party and year are included as main effects, along with their product as an interaction effect, all three factors are statistically significant in explaining interelection vote swings. This provides evidence that candidates of the same party do tend to be pushed in the same direction from year to year.[26]
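The swing computation and the analysis of variance just described could be reproduced along the following lines. This is a minimal sketch rather than the authors’ actual code: the file name house_returns.csv and the column names (district, year, party, inc_party_share) are hypothetical placeholders for whatever election-returns data one has at hand.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed layout: one row per district-year, with the incumbent party's
# share of the two-party vote.
returns = pd.read_csv("house_returns.csv")  # hypothetical file and columns
returns = returns.sort_values(["district", "year"])

# Interelection swing: the incumbent party's vote share in the later
# election minus its share in the same district at the preceding election.
returns["swing"] = returns.groupby("district")["inc_party_share"].diff()
swings = returns.dropna(subset=["swing"])

# Party and year as main effects plus their interaction; the F tests ask
# whether candidates of the same party move together from year to year.
model = smf.ols("swing ~ C(party) * C(year)", data=swings).fit()
print(anova_lm(model, typ=2))
```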
Another way of demonstrating this sharing of electoral experience is to look at a subset of the data used in the analysis of variance – namely, those districts in which an incumbent was running against a major-party opponent. There were 292 such districts in the 1948 election, for example. If we regress the swing to each of these incumbents on a dummy variable equal to 1 if the incumbent was a Democrat and 0 otherwise, the resulting coefficient gives the difference between the average swings to Republican and Democratic incumbents; the associated t statistic tests whether the difference in average swings to the two parties’ incumbents is statistically discernible from zero. The difference in average swings to the two parties’ contingents of incumbents in 1948 was 14.6 percent, statistically significant at the .0001 level. Table 5.1 gives the corresponding significance levels (with the coefficient and its standard error) for all years from 1948 to 2004. The difference in swings to the two parties is significant at the .05 level (or better) in all years except 1968, 1990, 1998, and 2000.
table 5.1. Partisan Differences in Interelection Vote Swings, 1948–2004

Year | Absolute Value of Estimated Coefficient of Party Dummy | Standard Error | Significance Level |
1948 | 14.6 | .76 | .0001 |
1950 | 5.1 | .60 | .0001 |
1952 | 4.8 | .74 | .0001 |
1954 | 8.1 | .56 | .0001 |
1956 | 4.6 | .59 | .0001 |
1958 | 13.6 | .64 | .0001 |
1960 | 5.9 | .62 | .0001 |
1962 | 2.0 | .68 | .004 |
1964 | 10.2 | .68 | .0001 |
1966 | 14.6 | .73 | .0001 |
1968 | 1.2 | .74 | .112 |
1970 | 7.3 | .75 | .0001 |
1972 | 3.6 | 1.00 | .004 |
1974 | 13.3 | 1.05 | .0001 |
1976 | 4.8 | .87 | .0001 |
1978 | 5.5 | .98 | .0001 |
1980 | 7.2 | 1.02 | .0001 |
1982 | 7.0 | .94 | .0001 |
1984 | 8.6 | .83 | .0001 |
1986 | 6.5 | .76 | .0001 |
1988 | 2.0 | .81 | .014 |
1990 | 1.6 | .96 | .093 |
1992 | 2.2 | 1.10 | .048 |
1994 | 12.9 | .78 | .0001 |
1996 | 7.3 | .65 | .0001 |
1998 | 0.1 | .66 | .899 |
2000 | 1.0 | .62 | .118 |
2002 | 4.6 | .73 | .0001 |
2004 | 2.6 | .69 | .001 |
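Each row of Table 5.1 could be generated by a regression of the kind just described. Continuing the hypothetical swings data frame from the earlier sketch, and assuming (again hypothetically) a 0/1 democrat dummy and an opposed_incumbent flag:

```python
import statsmodels.formula.api as smf

opposed = swings[swings["opposed_incumbent"]]          # hypothetical flag
for year, grp in opposed.groupby("year"):
    fit = smf.ols("swing ~ democrat", data=grp).fit()  # democrat: 1 if Dem, 0 if Rep
    print(year,
          round(abs(fit.params["democrat"]), 1),       # |coefficient|
          round(fit.bse["democrat"], 2),               # standard error
          round(fit.pvalues["democrat"], 4))           # significance level
```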
A third method of illustrating the existence of such a common element looks directly at the probability of winning, which we have posited to be of central concern to all incumbents. Pooling all contests with opposed incumbents in the period 1948–2004, we have estimated each incumbent’s probability of victory (using probit) as a function of two variables: the percentage of the vote garnered in the previous election, and the average swing to all other incumbents of the same party in that year (the value of the swing variable for Tony Coelho in 1984, for example, is the average of the 1982–4 swings to all Democratic incumbents other than Coelho). Table 5.2 presents the results of the analysis. As can be seen, the coefficient of the party swing variable is of the expected (positive) sign and statistically significant at the .0001 level.
What this coefficient means in terms of the typical incumbent’s probability of victory is explored in the lower panel of the table. Before discussing the information presented there, it should be noted that one would expect the impact of national electoral tides to vary from district to district. After all, even a very large positive swing cannot improve the chances of an incumbent already certain to win, but the same swing may substantially improve the chances of an incumbent who is in a close race. Thus, the answer to the question, “How much would a one percentage point change in the swing to an incumbent’s party change her chances of victory?” depends on the initial probability from which the change is to be made. The Initial Probability column in the lower panel gives a series of such hypothetical initial probabilities. The impact of a one percentage point decrease and of a five percentage point decrease in the swing to the incumbent’s party is given in the columns headed “1%” and “5%.” Thus, for example, an incumbent with an initial probability of victory of .90 would suffer a decline of .03 (to .87) were unexpected events to generate a one percentage point decrease in the swing to her party. A five percentage point decrease would produce a decline of .208 (to .692). The interpretation, of course, is not that the swing itself produces such effects but that the unobserved forces that harm other members of the party tend also to hurt the member in question. In other words, the common factors in the reelection chances of incumbents of the same party are large enough that the chances of each can be predicted by the average experience of the rest.

table 5.2. Partisan Swings and Incumbent Candidates’ Probabilities of Victory

Interpretation of Results

Initial Probability | 1% | 5% |
.99 | .005 | .051 |
.95 | .018 | .144 |
.90 | .030 | .208 |
.75 | .052 | .292 |
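The probit estimation and the lower-panel arithmetic could be reproduced roughly as follows. Again this is our sketch, not the authors’ code: the file and column names are hypothetical, and the fitted coefficient stands in for the estimate reported in Table 5.2. The decline function finds the latent probit index implied by an initial probability p0 and shifts it by the coefficient times the assumed drop in the party swing.

```python
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm

inc = pd.read_csv("opposed_incumbents.csv")  # hypothetical file and columns
X = sm.add_constant(inc[["last_vote", "party_swing_excl_self"]])
probit = sm.Probit(inc["won"], X).fit()
b = probit.params["party_swing_excl_self"]   # coefficient on party swing

# Decline in victory probability, starting from initial probability p0,
# when the swing to the incumbent's party falls by `delta` points.
def decline(p0, delta, beta):
    z0 = norm.ppf(p0)                 # latent index consistent with p0
    return p0 - norm.cdf(z0 - beta * delta)

for p0 in (0.99, 0.95, 0.90, 0.75):
    print(p0, round(decline(p0, 1, b), 3), round(decline(p0, 5, b), 3))
```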
The three sets of results just presented are sufficient to show both that there really is a common element in the reelection fates of incumbents of the same party and that it is large enough to be worth doing something about.
Nonetheless, two questions about these results might occur to those who view House elections as essentially local phenomena, in which the impact of any national or common element is negligible. First, it might be thought that the size of the common element will have declined substantially in and after the 1960s, with the growing importance of the “incumbency effect.” Second, it might be thought that the degree of commonality has been overstated for the Democratic Party because of an underrepresentation of southern Democrats in the data. We turn next to these two concerns.
A slight decline in the strength of partisan electoral tides can be seen in three different analyses. First, Table 5.1 (column 2) shows that the average difference in the swings to Democratic and Republican incumbents has declined a bit: from 7.2 percent in the 1950s to 6.8, 6.9, and 6.3 percent in the succeeding three decades. Second, Table 5.2 provides a probit estimation of incumbents’ probabilities of victory for the period from 1966 to 1988 (chosen because 1966 is often found in the literature on incumbency to be an important turning point). As can be seen, the coefficient on the party swing variable declines from .156 to .140, remaining significant at a high level. Third, Jacobson (1990) has estimated similar probit equations for the 1972–86 period and found quite comparable results. His equations have the additional merit of controlling for several variables not included here: whether the challenger had held previous elective office, how much the challenger’s campaign spent, and how much the incumbent’s campaign spent.[28] All told, the evidence points to only a slight decline in the magnitude of national partisan tides over the postwar period.
As for the southern Democrats, it is best to start with an account of why they are underrepresented in the data. Any analysis of House election results must make a decision regarding uncontested races. We have followed conventional procedure and excluded these races.[29] Because most uncontested races were in the South, and because the vast majority of southern representatives were Democratic during the period examined (especially in the early postwar years), the result is that a smaller proportion of southern than of northern Democrats who sought reelection make it into the analysis: 29 percent as opposed to 86 percent. This in turn means that the southern Democrats constitute 34 percent of all Democrats seeking reelection but only 15 percent of all Democrats in the analysis.
Because of this underrepresentation of southern Democrats, it is possible that our results overstate the magnitude of the common or national element in Democratic electoral chances. One way to test this is to look at the average interelection vote swings to three groups of incumbents – Republicans, southern Democrats, and northern Democrats – for the twenty-one election years from 1948 to 1988. The correlation between the yearly swings to the northern and southern contingents of the Democratic Party is .79 (significant at the .0001 level). By way of comparison, the correlations between the yearly swings to Republican incumbents and to the two groups of Democratic incumbents (northerners and southerners) are −.92 and −.68, respectively. These figures suggest that the difference in electoral experience between the parties has been far larger than any internal Democratic difference.
Another way to assess the differences in electoral experience of northern and southern Democrats is to look at how well the average swing to the northerners predicts success in the South, and vice versa. If the South were sui generis, then presumably electoral tides there would not be a good clue to northern success and neither would northern tides predict southern success. Table 5.3 presents the results of a test of this null hypothesis. Equation 1 in that table is the same as the first equation in Table 5.2, except that only Democratic incumbents are included. As can be seen, the estimated coefficients for Democratic incumbents by themselves are quite similar to those for Democrats and Republicans together. The second equation in Table 5.3 uses the average swing to incumbents in the “other” region of the party in place of the average swing to the full Democratic Party. That is, the value for southerners is the swing to northerners, while the value for northerners is the swing to southerners. The coefficient on regional party swing is significantly different from zero at the .0001 level but about half the size of that on the “full party” swing: .078 versus .164. As shown in the lower panel of the table, this translates into impacts on probability of victory that are about half the size of those reported in Table 5.2. The conclusion to draw from this evidence is that there is some regional variation in interelection vote swings, with southern and northern Democrats facing somewhat different electoral tides. But there nonetheless remains a detectable common element so that positive tides in one region are a good clue to success in the other.
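The regional comparisons in this and the preceding paragraph amount to computing group-mean swings by year and correlating them. A sketch, once more using the hypothetical swings frame and assuming a group column that codes each incumbent as Republican, northern Democrat, or southern Democrat:

```python
# Yearly mean swings for each group, then the cross-group correlations;
# the .79, -.92, and -.68 figures in the text are the kind of entries
# this matrix would contain.
by_group = (swings.groupby(["year", "group"])["swing"]
            .mean()
            .unstack("group"))
print(by_group.corr())
```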
The preceding subsection provided evidence that there is a common element in the electoral chances of House incumbents of the same party. We now ask whether members of the House recognize this.
One way to answer the question is by asking members directly. Responses in interviews are not always frank or well thought out, however, and in any event we do not know of any interviews that have asked the appropriate question. Another method is to note those instances in which members seem clearly to act on the hypothesis that there is a common element in electoral politics. Newt Gingrich’s attack on Jim Wright’s ethics, for example, seems to have been motivated by such a belief, as Thomas B. Edsall noted (Washington Post Weekly, 27 March 1989, 29). Unfortunately, we do not know quite how to assess this kind of evidence – how many such episodes would be convincing? – and so have not pursued it.

table 5.3. Northern Democratic Swings, Southern Democratic Victories, and Vice Versa^a

Independent Variables | Equation 1 | Equation 2 |
Constant term | | |
Incumbent candidate’s vote in last election^b | .089 (.009) | .086 (.009) |
Average swing to incumbent’s party^b | .164 (.016) | – |
Average swing to regional Democrats^c | – | .078 (.008) |
N | 3,176 | 3,176 |

Interpretation of Results

Initial Probability | 1% | 5% |
.99 | .002 | .016 |
.95 | .009 | .055 |
.90 | .014 | .086 |
.75 | .025 | .138 |

Standard errors in parentheses.
a Only Democratic incumbents were included in the analysis.
b See Table 5.2.
c For northern (southern) Democrats, this is the average of the swings to southern (northern) incumbents.
The method that we have pursued is to allow members of Congress to speak for themselves through their retirement decisions. If partisan electoral tides are perceived by members of Congress in roughly the same fashion (so that there is rough agreement on which way the tides will be flowing), then there ought to be a negative correlation between the rates at which incumbents of the two parties retire. Examining data from 1912–70, Jacobson and Kernell (1981, 54) report that “removing the secular growth of careerism by examining change scores and omitting [the 1942 war election], we find that Republican and Democratic retirements do move in opposite directions. The −.43 correlation (significant at .01) of the partisan retirement ratio indicates a pronounced systematic component in behavior which heretofore has been viewed as idiosyncratic.” Jacobson and Kernell note that the post-1970 period has seen substantial changes in retirement benefits, which have altered the pattern in retirement rates.
The argument in the rest of this chapter depends crucially on the premise that party records have at least a “noticeable” impact on the reelection probabilities of their members. We cannot quantify the degree of impact, but we can say that the stronger the reader believes the electoral impact of party records to be, the more convincing will be the arguments to come.
The evidence presented earlier should at least convince the reader that there is a common element in the electoral chances of members of the same party. This does not prove that party records must be important, of course, because there may be other mechanisms that produce a correlation between the electoral fates of members of the same party, mechanisms that are not related to or mediated through the party record and reactions to it. Nonetheless, we believe that any plausible explanation for electoral tides must to some degree involve party records and voter responses to parties as collectivities.[30]
It is not enough that what parties do – as encapsulated in their party records – affects the (re)election chances of their members. Some view national partisan swings as largely outside the control of members of Congress. For example, Mayhew (1974, 28) writes that “national swings in the congressional vote are normally judgments on what the president is doing…rather than on what Congress is doing.” He cites Kramer (1971) as showing that “the national electorate rewards the congressional party of a president who reigns during economic prosperity and punishes the party of one who reigns during adversity.” A bit later (Mayhew 1974, 30–1), he notes the difficulty of finding “an instance in recent decades in which any group of congressmen…has done something that has clearly changed the national congressional electoral percentage in a direction in which the group intended to change it.” If one accepts this view, then the prospects for the remainder of our argument – or for any argument that views congressional parties as instruments to improve the collective electoral fate of their members – are bleak.
We need, therefore, to reconsider the evidence. There are two points that bear stressing. First, although the extant literature (e.g., Kramer 1971; Tufte 1975; Bloom and Price 1975) does find that macroeconomic conditions and presidential popularity account for a substantial portion of the variation in the aggregate House vote, these variables are far from accounting for all the variation.10 Second, even that portion of the variation that is accounted for statistically by presidential popularity and macroeconomic conditions is not beyond congressional influence. If one believes that legislation can have a substantial impact on presidential popularity (or macroeconomic health) and that members of Congress are aware of this, then one must conclude that presidential popularity (or macroeconomic health) is the outcome of a game in which both Congress and the president have a role (see Kernell 1991). Members of Congress, in other words, collectively can influence the variables that influence partisan electoral tides.[31]
The argument of the rest of the chapter is simply that the element of commonality in the electoral chances of incumbents of the same party is strong enough to merit attention; parties that organize sufficiently to capture these potential collective benefits will be more successful electorally, hence more likely to prosper, than parties that do not.[32]

10 Kramer (1971), for example, explains about 64 percent of the variation. Tufte (1975) explains 91 percent but has only eight data points. Respecifications of Tufte’s model on longer time series show significantly lower R²s. Are congressional actions important in explaining that part of the variance not accounted for by the economy (and presidential popularity)? To show this positively, one would need some way of measuring what Congress does. But this is unavoidably difficult because of the nature of legislative action. Social Security legislation, for example, has not waxed and waned over the years as has the economy. It is therefore difficult to find its effect in aggregate time-series analysis – and the same problem besets virtually any issue. One might resort to some sort of analysis focusing on the point in time that the legislation was first passed. But suppose one were to find an issue that seemed to spark a noticeable gain for one party. That would beg the question of why this issue, if so profitable, was not pushed earlier. Finding an issue big enough to be clearly identifiable in the way that Mayhew (1974) demands is equivalent to finding a big mistake – a protracted failure to recognize the growing salience of the issue – by one of the parties. If the parties are actively sniffing out electoral advantage, then big issues with a clear national impact should be rare. This is not to say that congressional parties do not contribute to the record on which their collective interests ride, but only that the contribution comes in many small payments, each difficult to be sure of by itself.
Before showing how the organized may prosper, however, we shall consider how the unorganized may not. We assume, to begin with, that each legislator seeks to maximize his or her probability of reelection and can take a variety of actions in the legislature (e.g., speaking and voting) that affect his or her individual reputation, party’s collective reputation, or both. Because individual reputations (c_i) are essentially private goods, it is not difficult to explain why legislators undertake activities – such as pork barreling and casework – that enhance their own reputations. In contrast, the party’s reputation, based on its record (p), is a public good for all legislators in the party. This means that party reputations may receive less attention than they deserve, for the usual kinds of reasons (Olson 1965).
Consider, for example, the transition rules employed by House Ways and Means Committee chairman Dan Rostenkowski to facilitate passage of the 1986 Tax Reform Act. Certainly the Democratic members of Congress who benefited from these transition rules were in favor of them. Yet had Rostenkowski been too liberal in his distribution of this largesse, presumably there would have come a point at which the damage done to the reputation of the party as a whole would have outweighed the sum of individual benefits. Republicans nationwide would have champed at the bit to run against the party that sold out so completely to the special interests, and everyone in the Democratic Party could be made electorally better off by some package of retrenchments on transition rules and alternative, less-sensitive side payments to those bearing the brunt of the retrenchment. Yet no individual Democrat would have an incentive unilaterally to give up his or her transition rule(s), and so – in the absence of collective action of some sort – the party’s reputation on matters financial would be tarnished.
Another scenario in which both party and individual reputations might be tarnished, absent collective action, arises when legislation confers collective benefits and costs on many voters in many districts. Such legislation by definition poses at least two collective action problems that interfere with its being translated into electoral profit. First, benefits and costs are not excludable – they accrue to all citizens regardless of whether they individually have supported or opposed any legislators deemed responsible. Second, because bills are enacted by majority vote in a large assembly, no individual legislator can credibly claim personal responsibility for providing the benefit (Fiorina and Noll 1979). Both these problems make it less likely that any single legislator can turn his or her support of legislation conferring collective benefits into electoral profit. This difficulty in turn makes it theoretically less likely that legislation conferring collective benefits would ever get passed – or, more to the point, that it would ever get pushed far enough along in the legislative process so that it might actually come up for a vote.
The difficulty facing collective-benefits legislation of this kind can be exposed in the simple question: who is to bear the costs of drafting and negotiating logrolls in support of such legislation? This problem does not arise in complete information models, as can be seen in the following example.
Suppose that the majority party is divided into two factions, N and S. They face a unified opposition, R, and any two voting blocs constitute a majority. Only two bills are under consideration, N (proposed by N) and S (proposed by S). It is common knowledge that all legislators seek to maximize their own probability of reelection and that preferences over the bills are as follows (where Ns stands for the outcome in which bill N passes and bill S does not, ns means that neither bill passes, and so forth):
everyone in N: Ns > NS > ns > nS
everyone in S: nS > NS > ns > Ns
everyone in R: ns > Ns > nS > NS
Given these preferences, both bills will fail if everyone votes sincerely and the bills are voted on separately. But N and S can do better if they agree to package their bills and vote directly on the question, “Both (NS) or neither (ns)?” Moreover, there is no informational impediment in this model to N and S concluding this deal. Any individual in N or S would happily bear the apparently trivial costs of proposing such a package during floor consideration – and so the logroll might well occur. (The only “problem” in this model – and it does not obviously impede the logroll – is majority-rule instability: once NS is passed (or about to be passed), N and R could both do better by supporting Ns; and so forth.)
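The example can be checked mechanically. The following toy enumeration (ours, purely illustrative) encodes the preference orderings above and confirms that each bill fails under sincere separate voting while the package passes:

```python
# Preference orderings from the text, ranked best (3) to worst (0).
prefs = {
    "N": {"Ns": 3, "NS": 2, "ns": 1, "nS": 0},
    "S": {"nS": 3, "NS": 2, "ns": 1, "Ns": 0},
    "R": {"ns": 3, "Ns": 2, "nS": 1, "NS": 0},
}

def outcome(n_passes, s_passes):
    return ("N" if n_passes else "n") + ("S" if s_passes else "s")

def majority(yes_blocs):
    return len(yes_blocs) >= 2  # any two of the three blocs form a majority

# Sincere separate votes: a bloc supports a bill iff passing it alone is
# better for that bloc than the status quo ns.
def passes_alone(bill):
    result = outcome(bill == "N", bill == "S")
    yes = [b for b in "NSR" if prefs[b][result] > prefs[b]["ns"]]
    return majority(yes)

print("N alone:", passes_alone("N"))  # False: only N votes yes
print("S alone:", passes_alone("S"))  # False: only S votes yes

# The packaged question "both (NS) or neither (ns)?" succeeds:
yes = [b for b in "NSR" if prefs[b]["NS"] > prefs[b]["ns"]]
print("Package NS:", majority(yes))   # True: N and S vote yes, R votes no
```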
Now consider a more complex model in which (1) everyone in N wants a bill, N, whose characteristics are common knowledge; (2) everyone in N (and S) thinks that there probably exists some sweetener S that will induce S to go along with them; but (3) no one knows exactly what this sweetener is; and (4) it would be costly to “invent” an appropriate sweetener and sell it to S (and N). In this model, a free-rider problem arises for the members of N (and S): no single one of them wishes to bear or contribute to the costs of searching for the sweetener because this action is invisible to voters and the congressperson cannot credibly claim credit for it. Hence, collective-benefits legislation will be underproduced, entailing an electoral inefficiency: even though everyone in N and S could be made better off if a sweetener were produced, no one wants to contribute to the costs of its production, and so none (or too little) is produced.13
In the last section, we sketched two theoretical accounts of how unorganized groups of reelection-seeking legislators might overproduce particularistic-benefits legislation and underproduce collective-benefits legislation, in an electorally inefficient fashion. We now argue that political parties can help to prevent electoral inefficiencies of this kind.
The way in which parties do this can best be seen by considering the incentives of party leaders. So far, we have assumed that every legislator seeks simply to maximize his or her probability of reelection. This assumption led directly to the inefficiency result of the last section. Yet not all reelections are created equal. The payoff to being reelected is higher if one’s party wins a majority. In addition to the obvious payoffs in terms of the Speakership and committee chairmanships, this observation is borne out by the chronic and sometimes loud complaining of the Republicans in the House of Representatives, and by the pattern of voluntary retirements from the House.[33] Moreover, there may well be a purely electoral payoff to majority status: how much less money would Democrats get from business political action committees if they were in the minority? It seems likely that they would lose more than could be accounted for simply by the loss in members. The payoff to being reelected is also higher if one is elected or appointed to a leadership position in one’s party, rather than remaining in the rank and file. Both of these features are endogenous: majority status and leadership posts can be made more or less attractive by changes in House and caucus rules.

13 It is interesting, although tangential to our present purposes, to note that the free-rider problem in the production of collective-benefits legislation is prior to, and partially alleviates, the problem of instability. To get instability one needs complete and costless information about the electoral effects of all potential legislation, coupled with costless drafting of legislation. If drafting bills, communicating their characteristics (e.g., their likely effects), and negotiating logrolls are costly, then a free-rider problem may considerably reduce the supply of collective-benefits legislation – and hence the potential instruments by which instability could be revealed.
These simple facts – that majority status can be made preferable to minority status, that leading can be made preferable to following – suggest a rather different view of the motivation of rational legislators than that adopted in the last section. Reelection remains important, even dominant, but its importance can be modified significantly by the desire for internal advancement – defined both in terms of a party’s advancement to majority status and in terms of individual legislators’ advancement in the hierarchy of (committee and leadership) posts within their party. If internal advancement is to some extent contingent on the servicing of collective legislative needs, then the desire for internal advancement can play the leading role in solving the problems of electoral inefficiency mentioned in the last section. We shall show how this follows in the case of the Speaker of the House (other cases being similar in general outline).
We must first select a point in time at which to analyze the Speaker’s preferences. There are two possibilities: the (short) period just after a potential Speaker is elected to Congress but before he or she is elected as Speaker, and the (long) period after the Speakership election but before the next congressional election. In the first period, the goal of reelection to Congress has already been attained, as has the goal of majority party status. All that remains as an immediate goal is winning the nomination of the majority party as Speaker (which leads automatically to election by the House). In the second period, all three goals have been resolved for the present Congress, but remain to be attained in the next Congress. Of course, all three goals must be achieved anew in the next Congress. The primary difference in preferences, then, is simply one of which goal is most immediate (i.e., least discounted). We have chosen to focus on the second and longer period because it yields a technically simpler maximand. (We do not make the assumptions necessary to drive a real wedge between ex ante and ex post preferences, as does Kramer 1983; nonetheless, some similar problems arise and are discussed later.)
Given a focus on the period after the Speakership election but before the next congressional elections, we can write out the implied maximand for the Speaker of the House. We normalize the utility of failing to be reelected to the next Congress to be zero and use the following notation:
u_11 = the utility of being reelected, having one’s party secure a majority, and being reelected as Speaker

u_10 = the utility of being reelected, having one’s party secure a majority, and not being reelected as Speaker

u_01 = the utility of being reelected, having one’s party secure a minority, and being reelected as leader of one’s (now-minority) party

u_00 = the utility of being reelected, having one’s party secure a minority, and not being reelected as leader of one’s party

x = a vector of actions taken by the Speaker

R(x) = the Speaker’s probability of reelection, given x

M(x) = the probability that the Speaker’s party will secure a majority, given x and given that he wins reelection

S(x) = the probability that the Speaker will be reelected as Speaker, given x, given that he wins reelection, and given that his party secures a majority

L(x) = the probability that the Speaker will be reelected as leader of his party, given x, given that he wins reelection, and given that his party secures a minority
In terms of this notation, the Speaker’s maximand can be written as follows (we suppress the functional dependence of R, M, S, and L on x for convenience):
R[MSu_11 + M(1 − S)u_10 + (1 − M)Lu_01 + (1 − M)(1 − L)u_00]
The practical meaning of this expression is that Speakers are faced with a mixture of three motivations: maximization of their personal probability of reelection (R); maximization of the probability that their party secures a majority (M); and maximization of the probability that they are reelected as leader of their party (S and L). It is important to note that these three goals can in principle conflict but that the degree to which they do so in practice is endogenous to the majority party.
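The maximand can be transcribed directly; the sketch below (with placeholder utility and probability values of our own choosing) makes it easy to see how raising the value of majority status or of the Speakership itself shifts the weight the maximand puts on M, S, and L relative to R:

```python
# The Speaker's maximand, transcribed directly. All numbers are
# placeholders for illustration, not estimates.
def speaker_eu(R, M, S, L, u11, u10, u01, u00):
    """R, M, S, L are the probabilities defined in the text, already
    evaluated at the action vector x; the u's are the four utility levels."""
    return R * (M * S * u11 + M * (1 - S) * u10
                + (1 - M) * L * u01 + (1 - M) * (1 - L) * u00)

# Making leadership and majority status attractive (a high u11 relative to
# the other utilities) rewards actions that raise M and S, not just R.
print(speaker_eu(R=0.95, M=0.60, S=0.90, L=0.80,
                 u11=10.0, u10=4.0, u01=3.0, u00=1.0))
```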
Consider first the possibility of conflict. The three goals of maximizing R, M, and S/L differ most clearly in terms of the set of districts to which the Speaker needs to pay attention in order to satisfy those goals. To win reelection to Congress, he can focus primarily on his own district; to win reelection as leader of his party, he will probably focus on those districts that returned (or are expected to return) members of his party – representatives from these districts constitute the “electorate” for the leadership contest; to secure a majority for his party, he may consider all districts. (If the action (x) that the Speaker takes is construed to be simply the selection of a policy from a unidimensional policy space – and if some rather heroic assumptions are made, which need not detain us here – then the potential conflict between a Democratic Speaker’s goals can be expressed as follows: to maximize R he should choose x equal to the median of his own district; to maximize S/L he should choose x equal to the median of the median Democrat’s district; to maximize M he should choose x equal to the median of the median legislator’s district. The model that generates this result should not be taken too seriously, but it conveys the flavor of the possible conflict among the Speaker’s goals.15)
Despite the potential for conflict among the Speaker’s goals, they may not conflict much in equilibrium. As we show later, this is primarily because the Speaker is elected and faces competition for the post within his party.
This argument implicitly assumes that there is some equilibrium position that is best for winning one’s party’s nomination. But various instability results in the literature (McKelvey 1979; Schofield 1980; Schwartz 1986) imply that there will always exist some alternative set of actions and policies, regardless of the Speaker’s current set of actions and policies, such that some majority in the party would prefer the alternative to what the Speaker does. So why is a Speaker not always vulnerable to a “redistributive” attack from within his party? And why does this not make what is required to maximize S/L rather unpredictable, so that it is hard to say whether R and S/L conflict or not?
Our answer to the second of these questions hinges on some results in the spatial theory of electoral competition (Miller 1980; McKelvey 1986; Cox 1987). These results pertain to a model in which two aspirants for an elective office compete by announcing the policies that they would pursue if elected. The model is multidimensional (there are many policy issues), and so in general there will be instability; that is, any given set of policies will be vulnerable to defeat by some other set of policies. McKelvey (1986), following Miller (1980), shows that the competitors in such an election would nonetheless confine themselves to a subset of the possible policy platforms, the so-called uncovered set. The important properties of the uncovered set are two. First, the uncovered set can be small, located near the “center” of the electorate’s distribution of ideal points. Indeed, when the special conditions necessary for the existence of a multidimensional median are met, the uncovered set collapses to this single point; and when the conditions are “almost” met, the uncovered set is tiny. Second, in order to conclude that a competitor will choose a platform from within the uncovered set, one needs only to make the relatively mild assumption about motivation that no competitor will announce a platform X if there is another platform Y that is at least as successful against any platform the opponent might announce, and is better against some. That is, one need only assume that no competitor will play game-theoretically dominated strategies.[34]

15 The heroic assumptions are as follows. Assume that the policy space is unidimensional and interpret the action (x) that the Speaker takes as simply the selection of a policy that he will support using the power and resources of his office. This choice is made after the election of the Speaker in a given year; he anticipates the impact that his choice will have on R, M, and S/L two years hence. In this model, what is required to maximize R is clearly choosing x equal to the expected median of the Speaker’s district. What is required to maximize M and S/L is more complicated. If we think of the individual reputation of each legislator (c) as being determined by his own choice of what policy to support, the party reputation (p) as being determined by the Speaker’s choice of policy, and make the heroic assumption that the impact of c and p on R is additively separable, then each legislator will simply choose the median of his own district. In this case, maximizing S or L requires setting x equal to the expected median of the median Democrat’s district, whereas maximizing M requires setting x equal to the expected median of the median legislator’s district. This reveals a fairly clear potential tension between maximizing M and maximizing S/L. In the much more likely case that the impact of c and p on R is not additively separable, things are less clear. For example, if voters care a lot about any divergence between c and p, maximizing M may require something like minimizing the average divergence between c and p. In this case, the tension between maximizing S or L on the one hand, and M on the other, would be lessened.
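To make the covering relation concrete, here is a toy computation of the uncovered set (our illustration, not drawn from the sources cited): a random electorate of ideal points in two dimensions, a handful of candidate platforms, majority-rule comparisons, and the standard definition that x covers y when x beats y and everything that beats x also beats y.

```python
import numpy as np

rng = np.random.default_rng(0)
ideals = rng.uniform(size=(101, 2))   # 101 voters, 2 issue dimensions
alts = rng.uniform(size=(6, 2))       # 6 candidate platforms

def beats(x, y):
    """x beats y if more voters are closer to x than to y."""
    dx = np.linalg.norm(ideals - x, axis=1)
    dy = np.linalg.norm(ideals - y, axis=1)
    return (dx < dy).sum() > (dy < dx).sum()

def covers(x, y):
    """x covers y: x beats y, and everything that beats x also beats y."""
    return beats(x, y) and all(beats(z, y) for z in alts if beats(z, x))

uncovered = [i for i, y in enumerate(alts)
             if not any(covers(x, y) for j, x in enumerate(alts) if j != i)]
print("Uncovered alternatives:", uncovered)
```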
The uncovered set is relevant to the problem at hand because it shows that there are definite limits to the policy platforms that those seeking leadership positions will adopt – limits much more restrictive than the full range of opinion in the party. This in turn suggests that a member whose constituency interests dictate something rather far from the competitively optimal platforms in the uncovered set is less likely to seek leadership positions – because implementing the optimal policies would be electorally hazardous – and also less likely to win them – because other members of the party will recognize the constituency conflict and therefore doubt the member’s reliability in office. Thus, we are led again to predict that leaders will be chosen in such a fashion that their personal reelection is not too incompatible with the duties of office.
The primary weakness in the foregoing argument is that it relies on results that presume a two-way contest. What if there are more than two competitors for the Speakership nomination of the majority party? Cox (1989) has shown that certain types of voting procedures (what he calls “majority Condorcet procedures”) induce candidates to adopt positions in the uncovered set regardless of the number of candidates. Although we have no formal results, we believe that the method used by the Democratic Caucus – which requires a majority for nomination – also places significant constraints on the range of policies that look good for winning the nomination.
There remains the question of why Speakers are not forever being turned out of office, as might be expected on the basis of the spatial instability theorems. The answer has to do with violations of the assumptions underlying these theorems. Instability theorems can be interpreted in two ways: either as statements about preferences or as statements about behavior. If they are interpreted as statements about preferences, then their assumptions are quite general and their conclusion compelling: there will always be some majority, all of whose members could be made better off if policies were changed. If they are taken to refer to behavior, however, they entail the assumption that any coalition, all of whose members would individually benefit were another set of policies adopted, will in fact form and take action to ensure that appropriate change is forthcoming. This assumption ignores the costs of identifying coalitions and organizing them sufficiently so that their members’ collective interests can be served. It ignores, in other words, the existence of the prisoner’s dilemma that faces any hypothetical coalition seeking to overturn the status quo.[35]
In our view, the legislative process in the House of Representatives is in important respects more like research and development than like the costless and instantaneous voting that occurs in the spatial model. We view each Speakership as embodying a certain set of policy deals within the majority party, but the alternatives to these deals are not as clear as they are in the spatial model. More to the point, attainment of one of these alternatives is not a matter of a single motion on the floor changing everything that needs to be changed all at once. It is instead a matter of many motions taken over an extended period of time, with many details too costly to specify in advance and ultimate success uncertain. For this reason, we view Speakerships as Hobbes viewed governments (Hardin 1991): as equilibria to coordination games rather than as equilibria to spatial voting games (or divide-the-dollar games).
Once a Speakership has been launched, the Speaker serves to police and enforce a particular set of deals. It is true that some other set of deals might be preferred by some majority in the party. But ousting the incumbent Speaker and his deals and installing a new regime cannot be accomplished by a single costless vote: it requires a series of political battles, each with uncertain outcome. While the revolutionary battle rages, the value of the deals struck by the old Speaker may be lost to all members of the party. Moreover, when the dust settles and a new regime is in place, the original revolutionaries may or may not have gotten what they wanted.
From a theoretical perspective, the best way to maximize the probability that one’s party will win a majority next time may very well be to concentrate on getting the current majority reelected. After all, they have shown that they can win and have all the advantages of incumbency; challengers, on the other hand, are much more risky. To the extent that this is true, of course, there should be very little conflict between the goals of maximizing M and maximizing S/L.
The bottom line of this discussion is that, by creating a leadership post that is both attractive and elective, a party can induce its leader to internalize the collective electoral fate of the party. In Olsonian terms, creation of a position whose occupant is personally motivated to pursue collective interests serves to make the party a privileged group.[36]
The parameters of the model make clear what promotes and hinders this internalization of collective electoral interests. The more attractive the leadership is relative to rank-and-file status (the more intraparty inequality), the more attractive majority party status is relative to minority status (the more interparty inequality), and the less the leader has to worry about personal reelection, the more completely will the leader’s induced preferences be a combination of a purely collective goal (maximizing the probability that his party wins a majority at the next election) and a goal (maximizing the probability that he is reelected as party leader) that is unattainable for those who neglect to be responsive to collective interests.
Party leadership in the United Kingdom seems to have been designed particularly well to achieve internalization. First, the inequality in power between the back benches and the front benches is quite large, so retaining the leadership is important relative to retaining a seat in Parliament. Second, the inequality in power between the majority and minority is large, so that retaining majority status is important relative to retaining a seat in Parliament. Third, important party leaders are always run in safe districts and, if they happen to lose nonetheless, are immediately returned at a by-election (some obliging backbencher having resigned his or her seat for the purpose). Party leaders thus have very little in the way of parochial electoral concerns.
U.S. parties cannot compete with their U.K. counterparts in purity of organizational design. But the same principles are evident nonetheless. Intraparty power in Congress may be decentralized, but there are still lumps of it piled up in the leadership positions that are worth striving for. The minority party may be more capable of influencing legislation in the House of Representatives than in the House of Commons, but it is still decidedly preferable to be in the majority. This can be seen in the significantly higher retirement rates among minority party members. The average postwar retirement rate for the Democrats, when in the minority, was 8.91 percent; when in the majority, 7.03 percent. The comparable figures for the Republicans were 9.96 percent and 6.37 percent. A multivariate explanation of retirement rates finds most of the action not in party, but in majority status.[37] Finally, party leaders in the United States may not have a guaranteed return comparable to Margaret Thatcher’s, but how often are Speakers denied reelection by their constituents?[38]
Consider again Dan Rostenkowski and the transition rules to the 1986 Tax Reform Act. His district was generally reckoned to be a typical Democratic district and to be reasonably safe. From our perspective, the reason he did not distribute “too much” in the way of transition rules is that he had partially internalized the collective costs of such a course of action. He did make sure that Chicago got its share of transition rule benefits, but he did not hand out such large amounts to his own or other districts as to lessen the Democratic Party’s chances of securing a majority or his own chances at retaining his seniority on Ways and Means.
If party leaders do internalize collective electoral interests along the lines suggested, then the rest of the argument is fairly clear. Electoral inefficiencies that can potentially accumulate because of the free-rider problems inherent in legislation (of both the particularistic-benefits and collective-benefits kind) are prevented because party leaders have a personal incentive to prevent them. Thus, for example, leaders will be on the lookout for profitable logrolls within their party, for institutional arrangements that will encourage the discovery of information about potential logrolls and prevent their unraveling by bipartisan coalitions, and so forth.[39]
Our argument here has been subject to a few criticisms and questions. First, we have heard the argument that party loyalties in the electorate are not strong enough to drive the behavior of members of Congress, as we assume, particularly after the dealignment of the 1960s (the number of scholars arguing that the electoral importance of party cues declined is considerable; see Wattenberg 1998). By contrast, Bartels (2000) argues forcefully that the “decline of parties” thesis was overstated to begin with and is now badly out of date. Lipinski (2004) shows that members do indeed send constituents messages about congressional performance based largely on their partisanship; that is, they do not “run against Congress,” but they “run with their party.” And Jacobson (2000) argues that partisan voting in the electorate has gone hand in hand with partisan voting in the legislature.
Second, we have also heard the suggestion that members of Congress can avoid blame for their parties’ actions, so the collective brand name is unaffected by legislative action. Put another way, this criticism is that the negative electoral externalities that stem from sharing a label are negligible because members are highly skilled at disassociating themselves from inconvenient aspects of their parties’ overall reputations (Fenno 1978). Judging from the number of times members’ skill at bobbing and weaving has been asserted, this might seem a plausible contention. However, bobbing and weaving are costly actions. To disassociate themselves, members must expend real resources that might have been used on other tasks. Our interpretation of the evidence that members do seek to disassociate themselves from their parties is exactly opposite to what most in the literature seem to conclude. Costly action taken in pursuit of disassociation does not show that party reputations do not matter; it shows that they matter a good deal, enough to motivate the expenditure of lots of money in “damage control” exercises. The amount of money spent is a crude measure of the size of the externalities that can be (and have been) imposed on members by the twists and turns of their parties’ reputations.[40]
Third, a possible criticism of our model is that what legislative parties do is only weakly linked to what people think of them. One variant of this critique stresses that party identification at the individual level, or macropartisanship at the aggregate level, changes only slowly.[41] Another stresses that the primary force affecting the reputations of legislative parties is what the president does, not what the legislative parties do.
There are two possible responses to this criticism. One is of course to side with those who view individual party identification and macropartisanship as more responsive to current events. One need not side completely with the revisionists here. It is only necessary to believe that the level of responsiveness is sufficient to motivate significant effort by party politicians to cultivate and maintain a favorable collective reputation through legislative action.
An alternative and complementary line of response is to note that a majority party’s record of legislative achievement also affects the credit-claiming advantages enjoyed by its individual members. As we saw in the previous chapter, most of the major resources useful in legislating belong to members of the majority party. Because of this and the coordinative efforts of the party’s floor leadership, most bills that actually pass are pushed primarily by majority-party members (recall our discussion of bill sponsorship in the previous chapter; see also Hall 1996). Thus, merely because they are legislatively more able and prolific, majority-party members are in a better position to claim credit than are their minority-party counterparts.
In this chapter, we have articulated a view of parties as solutions to collective dilemmas that their members face. There are several points about this view that merit notice here.
First, we have focused solely upon collective dilemmas that entail electoral inefficiencies. Another perspective on parties might focus instead on collective dilemmas entailing policy inefficiencies. Like a multiproduct firm, a legislature produces many products to sell in many different markets, so that spending time and resources on one product reduces the time and resources available to spend on others (see Rohde 1991; Aldrich 1988; Cox and McCubbins 2001; Campbell, Cox, and McCubbins 2002). Of course, policy inefficiencies, by dragging down the majority party’s “profitability,” will ultimately create electoral inefficiencies, much as firms with unprofitable resource allocations have their stock value driven down by the market until they are reorganized with a more efficient allocation of resources and effort (Cox and McCubbins 2005). For the purposes of our discussion here, however, the differences between these two views are inconsequential.
Second, the collective dilemmas facing a party are “solved” chiefly through the establishment of party leadership positions that are both attractive and elective. The trick is to induce those who occupy or seek to occupy leadership positions to internalize the collective interests of the party, thereby converting the party into a privileged group (Olson 1965) for some purposes.
Third, solutions to collective dilemmas (i.e., the institutions of leadership and particular elected leadership teams) are stable because they are, in essence, equilibria in n-person coordination games. Nearly everyone in the party prefers that there be some agreed-upon leadership team rather than that there be no agreed-upon leadership team, even if they disagree on which team would be best. Because each leadership team carries with it particular policy predispositions and deals, leadership stability leads to a certain amount of policy stability as well.
[1] We do not ignore criticisms of the neo-institutional view, but it is not our primary task to deal with them here.
[2] Other more decentralized institutional solutions, such as a system of property rights, are not dealt with here.
[3] We thus agree with the usage in Taylor (1987, 19) and Bates (1987).
[4] The usual games included under the rubric of “collective dilemmas” are the prisoner’s dilemma, chicken, battle of the sexes, assurance games, and pure coordination games. The formal definition given in the text yields – in light of the folk theorem, discussed later in the text – a great many collective dilemmas when iterated games are considered. In light of the multiplicity of (inefficient) Nash equilibria in the theoretical world, it would seem sensible to adopt some stronger equilibrium concept. Unfortunately, the standard refinement of Nash equilibrium – subgame perfection – also falls prey to a folk theorem. At present, there is no well-worked-out refinement of the Nash concept that avoids the multiple equilibrium problem in iterated games. Consequently, we do not pursue any refinements here.
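To fix ideas, the folk theorem invoked in note 4 can be stated in standard form; the notation below is ours, not the authors’. Let \( \underline{v}_i = \min_{\sigma_{-i}} \max_{\sigma_i} u_i(\sigma_i, \sigma_{-i}) \) denote player \(i\)’s minmax payoff in the stage game. Then any feasible payoff vector \(v\) with \( v_i > \underline{v}_i \) for every \(i\) can be sustained as a Nash equilibrium payoff of the infinitely repeated game once players are sufficiently patient, that is, for all discount factors \( \delta \) close enough to 1. It is this abundance of sustainable payoff vectors – and hence of equilibria, many of them inefficient – that creates the multiplicity problem the note describes.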
[5] Obviously, the example of the politicians begins to stretch the usual meaning of “standardization” into something one would more usually describe as “coalition.” The point of including it is to emphasize the abstract similarity of these problems that, in game theory, are both discussed under the general heading of coordination games.
[6] If one assumes that “both switch” means that both actually switch to the other’s standard – and pay the associated costs – then it seems reasonable that this option would be ranked last, as in Figure 4.1. If one takes switching to be “giving in at the negotiating table,” then simultaneous switching would presumably give rise to further negotiation. One particularly simple form of further negotiation would be to flip a coin. In this case the entries in the Switch, Switch cell in Figure 4.1 should be 4, 4 rather than −2, −2. This change would make the game one of chicken. As it stands, it is a variant on battle of the sexes.
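Since Figure 4.1 itself is not reproduced here, the following payoff matrix is our own illustration, chosen only to be consistent with note 6’s description (in particular, the Switch, Switch entries of −2, −2); the remaining payoffs are hypothetical:

\[
\begin{array}{c|cc}
 & \text{Keep} & \text{Switch} \\ \hline
\text{Keep} & 0,\ 0 & 6,\ 2 \\
\text{Switch} & 2,\ 6 & -2,\ -2
\end{array}
\]

With −2, −2 in the Switch, Switch cell, the pure-strategy equilibria are (Keep, Switch) and (Switch, Keep), each preferred by a different player – a battle-of-the-sexes variant, as the note says. Replacing −2, −2 with 4, 4 reorders the payoffs so that mutual intransigence (Keep, Keep) becomes the worst outcome for both, which is the defining structure of chicken.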
[7] One plausible explanation of the timing of the Corrupt Practices Act of 1883 in the United Kingdom is that the aristocratic politics of the prereform era, in which the prisoner’s dilemma could fairly often be kept in bounds via negotiation between candidates who knew one another personally, gave way to an increasingly competitive and open system, in which the prisoner’s dilemma became less tractable to solution via repeat play.
[8] If just the right number of others is pulling, then whether or not you pull can make the difference between the boat progressing and stalling. For this reason, shirking is not a dominant strategy. Analysts generally reserve the term prisoner’s dilemma for situations in which noncooperation is a dominant strategy (e.g., Schelling 1978; Taylor 1987). Kavka (1987) calls this type of case a “quasi-prisoner’s dilemma.”
[9] There are several other major versions of the theory of the firm. A good review can be found in Tirole (1988). Here it suffices to note that collective dilemmas lurk at the heart of the other major theories as well. Consider, for example, Williamson’s (1975) notion of specific investment – an investment of time or money that will have a high range of payoffs if the investor can trade with a specific party and a low range of payoffs if the investor is forced to trade with others. If the specificity of the investment is known to the prospective trading partner, then once the investment is made said partner may be able to “hold up” the investor for essentially its full value. This, if anticipated, removes the investor’s incentive to invest. Williamson argues, among other things, that problems of investment and asset specificity – which are wrinkles on the prisoner’s dilemma – are easier to solve within firms than between them. This is one reason why firms exist.
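A numerical illustration of the hold-up logic in note 9 may help; the numbers are ours, not Williamson’s. Let the investment cost be \( k = 10 \), its value \( v_s = 15 \) in trade with the specific partner, and \( v_o = 4 \) in trade with anyone else. Once \(k\) is sunk, the specific partner can offer terms that leave the investor barely above the outside value \( v_o \); anticipating a return of roughly 4 on an outlay of 10, the investor declines to invest at all, even though \( v_s > k \) makes the investment jointly efficient. In short, \( v_o < k < v_s \) implies that ex post hold-up deters an ex ante efficient investment.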
[10] One criticism of theories that point to central authority as a solution to collective dilemmas is that they presuppose the solution of a prior collective action problem – that is, the creation and support of the central authority (Taylor 1987, 22). This is a valid point, but the collective action problem entailed in creating a position of authority is often more tractable than the original problem. In the case of the river boat pullers, for example, the men need only agree that someone be given a whip and a share of the pay. Those who refuse to contribute toward the purchase of a whip – should this be necessary – are simply excluded from the group. The whipper’s share is just as secure as that of any ordinary puller. More generally, Hardin (1991) points out that the problem of creating a state when none exists is a coordination problem (of the battle of the sexes kind) rather than a prisoner’s dilemma.
[11] The reader will notice similarities between the theory of the firm summarized in the following paragraphs and the Hobbesian theory of the state, which contrasts the fortunes of individuals who organize into a state with those of individuals who remain unorganized (leaving all their transactions to “anarchy”).
[12] This terminology was suggested to us by our colleague Sam Popkin.
[13] When all workers work at the efficient level, the value of total output must exceed the total social cost of effort for there to be a collective dilemma in the first place. Thus there always will exist sharing rules that give each worker sufficient remuneration to cover his or her effort costs.
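The existence claim in note 13 admits a one-line construction (our sketch, not the authors’). Let \(V\) be the value of total output when all \(n\) workers work at the efficient level, and let \( c_i \) be worker \(i\)’s cost of efficient effort; the premise of a collective dilemma is \( V > \sum_j c_j \). Proportional shares then suffice:

\[
s_i = \frac{c_i}{\sum_j c_j} \quad\Longrightarrow\quad s_i V = c_i \cdot \frac{V}{\sum_j c_j} > c_i ,
\]

so each worker’s remuneration covers his or her effort cost, and the shares sum to 1 as required.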
[14] The results regarding group production of public goods are even less encouraging as regards Pareto efficiency. Because by definition everyone consumes the entire quantity of a public good available, whether or not he or she has contributed to its production, the incentive to contribute cannot be manipulated by adjusting the share of output that an agent receives. If extreme complementarities in production exist (everyone must contribute, or nothing is produced), the efficient level of output may be achieved. Otherwise, production of the public good would have to rely on the type of selective incentives that Olson (1965) identifies and that Frohlich and Oppenheimer (1978) expand upon; that is, some central agent would have to monitor the contributions of individuals and mete out rewards and punishments accordingly. The analog to Holmstrom’s technique would require that the public good be destroyed if not produced in the efficient amount. This is hard to imagine in concrete instances. Would a group failing to clean up a local pond to the extent agreed upon then set about the costly task of restoring all the pollutants they had extracted?
[15] The equilibrium they identify is not perfect. See, for example, Friedman (1997) for a perfect equilibrium.
[16] On the UN monitors, see Maclean’s 101(2), August 29, 1988, pp. 10–17. Another important function of the UN troops was to raise the cost of violating the agreement, since this might entail casualties among noncombatant nations.
[17] The only other restriction to note is that the outcome be feasible (i.e., an outcome that is possible to attain via some strategy n-tuple in G, or via some correlated strategy n-tuple in G). See Aumann (1981).
[18] See Kiewiet and McCubbins (1991).
[19] There is a substantial literature in economics on managerial incentives in large corporations, much of it concerned with managers who maximize their own utility rather than firm profits. Major examples include Baumol’s (1962) sales maximization hypothesis and Williamson’s (1967) managerial discretion model.
[20] English law considered kings and bishops to be corporations sole, with infinite lifetimes.
[21] There are other factors that promote “good behavior” by central agents, but they are not endogenous to a single group. An example is the market for top corporate managers. Fama (1980) notes that a manager’s future remuneration depends substantially on the past performance of the firms managed. Thus, each individual firm does not need to solve the problem of managerial motivation solely by internal means; it is helped by the existence of a properly functioning market for managerial talent. A political analogue is implicit in Schlesinger’s (1966) idea of “progressive ambition.”
[22] Another approach, which defines parties in terms of the actions that they take, is pursued in Panebianco (1988).
[23] Downs is explicit in stating that his party teams are “coalition[s] whose members agree on all their goals” (Downs 1957, 25). A vast array of spatial models and studies of coalition formation also explicitly consider parties as unitary actors.
[24] For each legislator i in the same party, pi will be equal – but this does not mean that each legislator’s reelection fate depends in the same way on the party’s record, as we shall explain later.
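The distinction drawn in note 24 can be put in symbols (ours, not the authors’): write legislator \(i\)’s reelection probability as a function \( f_i \) of a personal component \( x_i \) and the common party record \(p\),

\[
\Pr(\text{reelection}_i) = f_i(x_i,\ p), \qquad \frac{\partial f_i}{\partial p} \ \text{varying across } i,
\]

so that \(p\) is identical for all co-partisans while its marginal electoral impact can differ from member to member, for example with district marginality.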
[25] This is not to deny that what is good for the legislative party may be bad for the presidential party. Individual Democratic legislators can run for reelection by picking and choosing the aspects of the overall party record that they wish to emphasize, avow, or disavow. Walter Mondale, in contrast, found it hard to dispel the image of a party beholden to special interests that candidate Reagan conjured up during the 1984 presidential campaign.
[26] This analysis is essentially the same as that conducted by Kawato (1987), except that he deals with a longer time period and uses the components of variance technique. Kawato also found a statistically significant national or common element in interelection swings (personal communication).
[27] We performed a similar analysis for open seats. In 1948 (again), there were thirty-five contests in which no incumbent candidate ran. The swing to the incumbent party in these districts was regressed on a dummy variable equal to 1 if the Democrats were the incumbent party and to 0 otherwise. The coefficient on the dummy variable tests the hypothesis that there is no difference in the average swing to two groups of candidates: (1) Democratic candidates defending a seat from which a Democratic incumbent has retired; and (2) Republican candidates defending a seat from which a Republican incumbent has retired. If we did not include the dummy variable and simply regressed swing on a constant term, the results would give the average swing to a party losing an incumbent candidate – that is, an estimate of what is usually called the “retirement slump.” By including the party variable, we test whether the slump a party suffers upon retirement of one of its incumbents is worsened – or offset – by national partisan swings. The results can be summarized as follows: prior to 1966, all but two of the party coefficients are significant at the .05 level; afterwards, as one would expect from the literature on the “incumbency effect,” all but two of the party coefficients are insignificant. The last year in which open seats were identifiably affected by national partisan trends was 1974.
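For concreteness, the open-seat regression described in note 27 can be sketched in a few lines of Python; this is our illustration, with hypothetical variable names (swing, dem_party) and simulated data, not the authors’ original code or data:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data for one election year's 35 open-seat contests:
    # 'swing' is the swing to the incumbent (retiring) party in each
    # district; 'dem_party' = 1 if the Democrats are the incumbent
    # party, 0 if the Republicans are.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({"dem_party": rng.integers(0, 2, size=35)})
    df["swing"] = -5 + 4 * df["dem_party"] + rng.normal(0, 3, size=35)

    # The constant alone would estimate the average "retirement slump";
    # the party dummy tests whether national partisan swings worsen or
    # offset that slump for one of the two parties.
    X = sm.add_constant(df["dem_party"])
    model = sm.OLS(df["swing"], X).fit()
    print(model.summary())

A significant coefficient on the dummy, as the note reports for the pre-1966 years, would indicate that open seats were caught up in national partisan tides.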
[28] Jacobson (1989) performed a probit analysis over the entire 1946–86 period in which time trend interaction terms were included for all his variables. He found a statistically significant decline in the party swing coefficient, but the decline was not particularly large in a substantive sense.
[29] An uncontested race is defined as one in which only one major party candidate seeks election to the seat in question.
[30] How would an explanation for electoral tides go that made no reference to party records? One might suppose that Republicans do worse on average than Democrats in some given year because most of them have supported some specific policies that their constituents have judged harshly. But then one must ask why more Republicans than Democrats were unable to predict what the reactions of their constituents would be to their legislative actions. If all politicians are equally good at catering to their constituencies, then tides of this type should rarely occur. Another possibility is that most of the Republicans bought into a particular policy stand that events then undermined. Voters do not think of the policy as a Republican policy, they just think of it as a failed policy, and most of the candidates who supported it happen to be Republicans. This scenario of course provides what are seemingly ideal conditions for collective responsibility to be assigned, and one must ask why it is that voters blame individual Republicans for the failure of a policy to which they as individuals contributed only one vote.
[31] Another route to showing that not all the action is extracongressional is to run the probits in Table 5.2 again, including economic variables and presidential approval ratings. We have done this and found no change in the size and significance of the party swing variable. The common element in the electoral fates of incumbents of the same party cannot be explained simply by economic and presidential variables.
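The robustness check in note 31 has the following shape in code; again this is our own sketch with hypothetical variable names and simulated data, not the authors’ probits from Table 5.2:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical incumbent-level data: 'win' = 1 if the incumbent was
    # reelected; 'party_swing' is the national swing to the incumbent's
    # party; 'gdp_growth' and 'pres_approval' are the added controls.
    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "party_swing": rng.normal(0, 3, n),
        "gdp_growth": rng.normal(2, 1.5, n),
        "pres_approval": rng.normal(50, 10, n),
    })
    latent = 0.3 * df["party_swing"] + 0.1 * df["gdp_growth"]
    df["win"] = ((latent + rng.normal(0, 1, n)) > 0).astype(int)

    # Re-estimate the probit with the controls included and inspect
    # whether the party-swing coefficient changes in size or significance.
    X = sm.add_constant(df[["party_swing", "gdp_growth", "pres_approval"]])
    probit = sm.Probit(df["win"], X).fit()
    print(probit.summary())

The note’s finding – no change in the party-swing coefficient – is what supports the claim that the common partisan element is not reducible to economic and presidential variables.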
[32] A somewhat different starting point for a theory of parties would see them as organizations designed to facilitate passage of those policies that members of the party held in common. We do not intend to deny the validity of this approach by pursuing the one that we do in the text. Rather, just as in the literature on party behavior, it seems fruitful to pursue an analytical policy of “divide and conquer” – considering the main motivations behind party development one at a time. For research on parties as vehicles for producing policy, see Brady and McCubbins (2002; 2006).
[33] Majority status, not party, predicts retirement rates. This can be shown as follows. Let the dependent variable be the retirement rate (computed, for a given party and Congress, as the percentage of all that party’s sitting members who do not seek reelection, for some reason other than death). We have two observations per Congress, for a total of forty-two. Regress this dependent variable on the following independent variables: Party (= 1 for the Democrats, 0 for the Republicans); Majority Status (= 1 if the party controlled the House, 0 otherwise); and Presidential Status (= 1 if the party controlled the presidency in November of the election year ending the Congress, 0 otherwise). The result can be expressed as follows: Retirement = 8.99 − .04∗Party − 2.67∗Majority Status + 1.87∗Presidential Status. The t statistics for Party and Majority Status were .03 and 1.95, respectively. Given that there was considerable collinearity between the Party and Majority Status variables (the Democrats were almost always in the majority), the results are surprisingly strong. They indicate almost no partisan effect and a substantial majority status effect: holding constant other variables, majority status is worth a decrease of 2.67 percentage points in the retirement rate of a party. The Presidential Status variable reflects the federal appointments that are available to a representative whose party controls the presidency, and has a t of 2.30. For a discussion of Democratic retirements following the Republican Revolution, see Jacobson (1996a; 1996b).
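The specification spelled out in note 33 translates directly into code; the sketch below uses simulated data and our own variable names, and merely reproduces the form of the regression (two observations per Congress, forty-two in all), not the authors’ dataset:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical party-Congress data (2 observations x 21 Congresses).
    rng = np.random.default_rng(2)
    n = 42
    df = pd.DataFrame({
        "party": rng.integers(0, 2, n),      # 1 = Democrats
        "majority": rng.integers(0, 2, n),   # 1 = controlled the House
        "president": rng.integers(0, 2, n),  # 1 = held the presidency
    })
    # Simulated outcome built around the coefficients reported in the note.
    df["retirement"] = (8.99 - 0.04 * df["party"] - 2.67 * df["majority"]
                        + 1.87 * df["president"] + rng.normal(0, 2, n))

    X = sm.add_constant(df[["party", "majority", "president"]])
    print(sm.OLS(df["retirement"], X).fit().params)

On real data, the note’s result is that the majority-status coefficient carries the effect while the party coefficient is essentially zero.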
[34] Of course there are other assumptions, for example regarding voters and the nature of competition, that must also be accepted. These, too, seem relatively mild. McKelvey used a somewhat restrictive assumption about voter utility functions to derive the result, but this has been relaxed by Cox (1987).
[35] For an explication of this line of argument, see Lupia and Strom (1995) and Lupia and McCubbins (2005).
[36] An example of how this logic might play out in practice can be given by continuing the example of the Tax Reform Act of 1986. Dan Rostenkowski, as chairman of the committee on Ways and Means, was clearly in a position of great authority and power. This position has been to some degree elective for quite some time. Rostenkowski could be said to be from
[37] See note 15.
[38] The answer is not often. Prior to Tom Foley’s removal in 1994, the last Speaker to be denied reelection by his constituents was William Pennington, Whig Speaker in the Thirty-sixth Congress (1859–61).
[39] Note that in the logrolling example given earlier, the Speaker’s preferences would plausibly be NS first, regardless of whether he was in the N or S faction. This is because he internalizes the reelection probabilities of all parts of the party. If true, then the logroll has an element of stability: the party leadership is interested in preserving it and will presumably seek to scuttle any legislation that would unravel it.
[40] If expenditures on disassociation imply significant externalities (and significant party reputations), does the lack of expenditures on disassociation then imply that there must be no externalities (and weak party reputations)? Logically, (p → q) does not entail (∼p → ∼q). Thus, one would have to argue separately for the latter relationship.
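A one-line truth-table counterexample (ours) makes the logical point in note 40 concrete: take \(p\) false and \(q\) true; then \( p \to q \) is true while \( \neg p \to \neg q \) is false, so the first relationship cannot entail the second.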
[41] For the debate on how sensitive party identification at the individual level is to party accomplishments, see Miller and Stokes (1963) and Fiorina (1981). For the debate on how sensitive macro-partisanship is to political events, see Kinder and Kiewiet (1981) and Kiewiet (1983).