Wednesday, January 14, 2009
John Holdren describes energy as "indispensable," "reliable," "affordable"
Master Resource, January 14, 2009
From time to time, John Holdren has acknowledged that plentiful, affordable, reliable energy is vital to human well-being. Indeed, there is no going back to an energy-poor world. (Remember: caveman energy was 100% renewable.)
When Holdren or Obama advocates policies that risk making energy artificially scarce or less reliable, these words can be used to argue for nonregulatory approaches to energy policy:
“Virtually all of the benefits that now seem necessary to the ‘American way’ have required vast amounts of energy. Energy, in short, has been our ultimate raw material, for our commitment to economic growth has also been a commitment to the use of steadily increasing amounts of energy necessary to the production of goods and services.”
- John Holdren and Philip Herrera, Energy (San Francisco: Sierra Club, 1971), p. 10.
"When energy is scarce or expensive, people can suffer material deprivation and economic hardship.”
- John Holdren, “Population and the Energy Problem,” Population and Environment: A Journal of Interdisciplinary Studies, Spring 1991, p. 231.
“Energy is an indispensable ingredient of material prosperity. . . . Where and when energy is in short supply or too expensive, people suffer from lack of direct energy services (such as cooking, heating, lighting, and transport) and from inflation, unemployment, and reduced economic output.”
- John Holdren, “Population and the Energy Problem,” Population and Environment: A Journal of Interdisciplinary Studies, Spring 1991, p. 232.
“Supplying energy to the economy contributes to the production of a stream of economic goods and services generally supportive of well-being.”
- John Holdren, “Coal in Context: Its Role in the National Energy Future,” University of Houston Law Review, July 1978, p. 1089.
“A reliable and affordable supply of energy is absolutely critical to maintaining and expanding economic prosperity where such prosperity already exists and to creating it where it does not.”
- John Holdren, “Memorandum to the President: The Energy-Climate Challenge,” in Donald Kennedy and John Riggs, eds., U.S. Policy and the Global Environment: Memos to the President (Washington, D.C.: The Aspen Institute, 2000), p. 21.
“Affordable energy in ample quantities is the lifeblood of the industrial societies and a prerequisite for the economic development of the others.”
- John Holdren, “Meeting the Energy Challenge,” Science, February 9, 2001, p. 945.
[For full text see original link above.]
Energy Savings Through American Chemistry
The U.S. business of chemistry is unique. We use energy to save energy. We are the principal supplier of materials that make the U.S. economy more energy efficient. From insulation materials, roof coatings, lightweight vehicle parts, and energy-saving tires to appliances, light bulbs, and materials for wind and solar power, our industry is essential to the nation’s efforts to save energy and reduce greenhouse gas emissions. As one of America’s most energy-intensive sectors, we’re also improving energy efficiency and reducing greenhouse gas emissions in our own operations.
- Building insulation materials made from chemistry save as much as 40 BTUs of energy for every BTU of energy consumed to make the material (see the worked example after this list). House wraps save 360 BTUs of energy for every BTU used to make the material, and foam insulation can make a home up to 70% more energy efficient.
- Every pound of plastics and composites used to “lightweight” an automobile produces 2-3 pounds of weight savings in that vehicle.
- “Low rolling resistance” tires are made by adding chemistry products—silica and polysulfidosilanes—to tire tread to help increase fuel efficiency.
- Automotive and industrial lubricants rely on chemistry products to help reduce friction and energy usage.
- Solar power relies on silicon-based materials and other chemistry products. Wind power blades contain many chemistry products, including polyester and resin additives.
- Chemistry-intensive roof coatings help reflect solar heat away from the rooftop, promoting cooler indoor spaces.
- Compact fluorescent light bulbs, made with chemistry to “fluoresce” (give off light), use 70% less energy than conventional light bulbs and last 10 to 20 times longer.
- Appliances such as refrigerators and air-conditioning equipment rely on chemistry products, including insulation and coolants, that have helped improve their energy efficiency by 30 to 50% since the 1970s.
- Vinyl windows have excellent thermal performance properties, while vinyl-coated wire and cable have high electrical resistivity, helping to prevent energy losses.
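To make the insulation figures above concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 40:1 savings ratio comes from the list; the embodied-energy value is a hypothetical placeholder.

    # Back-of-the-envelope energy payback for building insulation, using
    # the 40:1 savings ratio cited above. The embodied-energy figure is a
    # hypothetical placeholder for illustration only.
    embodied_mmbtu = 10.0        # assumed energy to manufacture the insulation
    savings_ratio = 40.0         # BTUs saved per BTU consumed in manufacture

    lifetime_savings = embodied_mmbtu * savings_ratio   # 400 MMBtu saved
    net_savings = lifetime_savings - embodied_mmbtu     # 390 MMBtu net of manufacture

    print(f"lifetime savings: {lifetime_savings:.0f} MMBtu")
    print(f"net savings:      {net_savings:.0f} MMBtu")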
One way the business of chemistry is improving its energy efficiency is through the use of combined heat and power (CHP), also known as cogeneration. CHP is the simultaneous generation of electricity and useful heat from a single fuel source, typically at a plant located at or near the manufacturing facility. Because most CHP facilities use natural gas and create two forms of energy (electric power and steam) from the same amount of fuel, they are often twice as efficient as older, coal-burning electric utilities. CHP supplies nearly 25% of our industry’s power requirements.
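The arithmetic behind the "twice as efficient" claim can be sketched as follows; the individual efficiency figures below are assumed for illustration, since the text itself asserts only the overall ratio.

    # Why CHP can roughly double the useful energy extracted from fuel.
    # Efficiency figures are assumed, typical-order illustrations.
    fuel_btu = 100.0                                 # fuel input, arbitrary units

    # Older coal-fired utility: only the electricity counts as useful
    # output; the rest of the fuel's energy is rejected as waste heat.
    coal_electric_eff = 0.33                         # assumed
    coal_useful = fuel_btu * coal_electric_eff       # 33 BTU useful

    # Gas-fired CHP: the same fuel yields electricity plus process steam
    # that the chemical plant actually uses.
    chp_electric_eff = 0.35                          # assumed
    chp_thermal_eff = 0.45                           # assumed
    chp_useful = fuel_btu * (chp_electric_eff + chp_thermal_eff)   # 80 BTU useful

    print(f"useful output ratio: {chp_useful / coal_useful:.1f}x") # ~2.4x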
Through the Responsible Care® program, a global chemical industry performance initiative implemented in the United States through the American Chemistry Council (ACC), we require members to report energy efficiency and greenhouse gas emissions intensity data to ACC. Through the ACC web site, www.americanchemistry.com/responsiblecare, these companies make more performance information publicly available than any other private-sector industry group.
Hybrids Don't Get the Mileage They Say They Do
Planet Gore, January 14, 2009
Honda is being sued because its hybrid isn't performing as advertised:
Green car owners have apparently complained in such large numbers that the Honda Civic Hybrid isn’t living up to its high-mileage claims that the carmaker has approached U.S. government regulators about revising its mileage guidelines, according to a lawsuit filed by one Honda hybrid owner.
A California appellate opinion filed on Monday showed that a Honda customer service representative told Gaetano Paduano, the dissatisfied owner of a 2004 Honda hybrid, that the company had received “a high number of complaints” that the sedan achieves significantly less than its promised mileage of 47-plus miles per gallon.
The rep also told Paduano that the company and rival Toyota have approached the U.S. EPA to change the mileage rating on their hybrid cars, the opinion said.
Another Honda rep told Paduano that he probably couldn’t achieve the advertised mileage by driving the vehicle like a conventional car, despite claims to the contrary in a Honda brochure advertising the vehicle, the opinion said.
A Honda spokesman would not comment on the pending litigation.
Iraq's Accession to the Chemical Weapons Convention
Press Statement
Sean McCormack, Spokesman
Washington, DC, January 14, 2009
The United States welcomes Iraq’s decision to become the newest party to the Chemical Weapons Convention (CWC), which will become effective on February 12, 2009. Iraq will be the 186th member of the CWC’s Organization for the Prohibition of Chemical Weapons (OPCW), leaving only 9 states as non-members. This is a historic decision by the new Iraqi government to join the international community in efforts to eliminate all chemical weapons and their production facilities. The United States has supported Iraq in its preparation to join the CWC and will continue to assist Iraq with its implementation. We welcome Iraq into the OPCW and look forward to working with Iraq in moving closer to the complete elimination of chemical weapons programs throughout the world.
2009/049
Brillon case: A perverse legal incentive goes before the Supreme Court
WaPo, Wednesday, January 14, 2009; page A16
FROM 2001 through 2004, Michael Brillon was represented by six different publicly funded attorneys. He fired two, claiming they did little to no work on his case; he threatened the life of a third, forcing that attorney's withdrawal; and two others left the case. As a result, Mr. Brillon, who had three prior felony convictions, spent three years in jail before, with the sixth lawyer representing him, he was convicted by a jury of second-degree aggravated domestic assault and sentenced to 12 to 20 years in prison. He was given credit for the time he served before his trial.
The Vermont Supreme Court threw out Mr. Brillon's conviction and barred prosecutors from retrying him after concluding that Mr. Brillon's right to a speedy trial had been violated -- even though the delays were the fault of Mr. Brillon or his lawyers. The court concluded, in part, that because the public defenders were paid with state money, the state must be held responsible. The Supreme Court, which heard arguments in the case yesterday, should overrule the decision.
Public defenders provide an indispensable service. They have been overworked and underfunded for far too long, and it is little wonder that some may not be able to competently manage all of their cases. This neglect risks travesties of all kinds, including the incarceration of innocent people. But letting defendants and their lawyers benefit from delays of their own making would create a perverse incentive for them to drag their feet. Some may question the likelihood of a defendant's choosing to remain behind bars without being convicted, but it is not far-fetched to imagine such a calculus from those who, as Mr. Brillon did, face long sentences.
Linking dismissal to time factors alone is a bad idea, but a defendant must have an opportunity for relief if a public defender's neglect is significant or if serious delays result from lawyers being cycled through a case. Defendants who can prove that such delays and neglect substantially harmed their cases would be entitled to new trials. These kinds of motions are rarely granted, but they provide a safety valve in extraordinary circumstances.
George Miller on Secret Ballot In Union Recognition Elections: 2001
OpenMarket, CEI blog, September 11, 2008
…looks very different than it does in the United States. And by Miller, I mean Rep. George Miller (D-Calif.).
In 2001, Rep. Miller and several of his colleagues wrote to a labor adjudication body in the Mexican state of Puebla urging that body to uphold the secret ballot as “absolutely necessary to ensure that workers are not intimidated into voting for a union they might not otherwise choose.”
Yesterday, the Mexican Supreme Court agreed, ruling that workers’ secret ballots in union organizing elections must be protected, reports Laborpains.org.
What does this have to do with the U.S.? Legally, nothing, but politically, plenty. The protection that Miller so strongly defended for Mexican workers is the very one he is now trying to take away from American ones, through his sponsorship of the mandatory card-check organizing bill known as the Employee Free Choice Act.
Miller’s double-talk notwithstanding, he does deserve credit for one thing: He was right once. Now if only he could apply such wisdom inside his own country’s borders.
Conservative views: The Myth of the (Bush) Imperial Presidency
AEI, Tuesday, January 13, 2009
How will President Barack Obama view his powers once in office? Certainly, the majority of his supporters have argued that President George W. Bush abused the office's powers and expect Obama to take a more modest view of his authority. Yet, in times of war, when U.S. security is threatened, presidents typically push their executive powers forward. This is something the Founders surely understood. While Bush could have been more skillful in making the case for his policies and in his dealings with Congress, he did not exceed his authority.
Turn to almost any political science book on American government or legal casebook in constitutional law, and the impression one gets is that the U.S. Constitution has largely been a dead letter for some time now. It is dead, scholars argue, either because the document was written so broadly to begin with that it can be interpreted in any way a politician or justice feels is necessary, or because the principles that originally informed the Constitution are so passé that they rightly need to be ignored if the country is to be effectively and democratically governed.
But against this current of thought runs the actual practice of American politics. In particular, one finds that when presidents make decisions in the area of national security that are unpopular or become unpopular, the opposition will turn to the Constitution to decry the president's supposed misuse of his intended powers. Most often, the charge is that the president has become "imperial."[1] Instead of being an agent of the law and confined within the strictures of the Constitution, a president will stand accused of placing himself above the law and bypassing our system of checks and balances in a mode echoing past royal rule. Most famously, this was the charge leveled at President Richard Nixon (and to a more limited degree his predecessor, Lyndon Johnson) during the Vietnam War years; it has been resurrected to apply to the Bush administration and its handling of the Iraq war and, more generally, the war on terror.[2]
Hence, it turns out that the Constitution is not so dead--at least when it comes to challenging presidents in time of an unpopular war. Although the instinct to challenge the government's use of power by looking to the Constitution is a healthy one, it remains to be seen whether the actual reading of the Constitution is accurate when applied to presidential powers.
Generally speaking, those who see the president as acting in an imperial manner view the presidency as having limited powers and requiring the assent of Congress in some fashion or another before taking action in critical national security matters. The president is a small "e" executive, occupying an office largely designed to carry out the decisions of the other branches of government.
Against this view, however, stands the commentary of perhaps the greatest analyst of the American political system, France's Alexis de Tocqueville. After describing in volume one of Democracy in America the weakness of the presidency at the time of his visit to the United States in the early 1830s, he commented:
If the executive power is less strong in America than in France, one must attribute the cause of it perhaps more to circumstances than laws. It is principally in relations with foreigners that the executive power of a nation finds occasion to deploy its skill and force. If the life of the Union were constantly threatened, if its great interests were mixed every day with those of other powerful peoples, one would see the executive power grow larger in opinion, through what one would expect from it and what it would execute.[3]
Tocqueville concluded by noting that "the president of the United States possesses almost royal prerogatives, which he has no occasion to make use of, and the rights which, up to now, he can use are very circumscribed: the laws permit him to be strong, circumstances have made him weak."[4]
For Tocqueville, then, the "whiggish" or weak view of the presidency is misleading. In fact, the president's authorities are inherently substantial--needing only the right state of affairs to be drawn out and exercised. In the 1830s, the country was focused on domestic affairs, and, as a result, presidents of the time were in practice less important than Congress. Issues such as tariffs and internal improvements were bound to be fodder for the House and Senate to exercise their unique talents in debating, horse-trading, and mustering up legislation to codify the consensus they had reached. But looking more deeply than even his American contemporaries, Tocqueville saw in the presidency an office that would become increasingly commanding once the United States rose to the great-power status he believed it would eventually achieve.
What the Framers Had in Mind
At first reading, Tocqueville's understanding of the presidency appears to be at odds with that of the founding generation, who, after all, revolted against what they saw as the British monarchy's despotic use of its powers. And, indeed, concerns about the potential executive abuse of power did lead them to establish a federal government (under the Articles of Confederation) that had but a single congressional body to handle all its affairs and to create a slew of new state governments, the vast majority of which made the governor decidedly subordinate to the legislature.
But a decade's worth (1776-87) of unstable governments and ineffective governance led many in that same generation to rethink the utility of a unitary, independent chief executive. Under the Articles, for example, Congress exercised a variety of powers, some clearly legislative in nature and others concerned with directing the new republic's foreign and defense affairs. It was these latter concerns--which they consistently and explicitly described as being "executive" in nature--that in fact took a considerable bulk of the members' time. Here, time and again, they found the lack of an independent, single executive to be debilitating. Negotiations with foreign powers were needlessly prolonged and muddled, there was poor execution of the decisions that they did make, the war effort was complicated by the constant meddling of Congress, and it proved difficult to keep secrets.[5] As a result, by the time the Constitutional Convention met in the spring of 1787, a priority for key members of that founding generation was to establish an executive that could, in the words of Federalist 70, act with "decision, activity, secrecy, and dispatch." Unlike the Articles, which had collapsed legislative and executive powers into a single body (Congress), the new constitutional system, based on the separation of powers principle, was intended primarily not to check executive power but actually to free it up.
As noted above, it is a misperception that the founding generation thought that executive power was limited to the administration of a government's laws. Although still working through how exactly to best institutionalize executive power in a republican government, it consistently kept clear the distinction between legislative, executive, and judicial kinds of powers. These powers might overlap in some cases, but, on the whole, when the Founders talked about executive power, they did so in the manner of Locke and Montesquieu, meaning that the holder of that power was concerned not only with the execution of the laws, but also with foreign and defense affairs.[6]
For example, in 1785, when James Madison was asked for his advice about what Kentucky's constitution should look like once the territory became a state, he admitted that he was perplexed when it came to describing the post of governor. The reason he gave is that, other than the responsibility of executing the laws, "all the great powers which are properly executive" had already been "transferred to the federal Government," referring to the contemporary Congress's control of foreign and defense affairs.[7] Similarly, when the delegates to the Constitutional Convention first met, they were presented with an initial outline for the new constitution by the Virginia delegation, the so-called Virginia Plan. Under this plan, the new federal executive was both to "administer" the laws and to exercise the "Executive Rights vested in the Congress" of the Confederation.[8]
Indeed, the real problem facing the Convention was not what constituted executive power, per se, but how to create an executive office that could properly carry out what Madison had called "the great powers" and yet not create a monarch in practice. This is the reason why, when today's constitutional scholars and jurists look to the Convention for guidance on what a particular presidential power might mean, they find little with which to work. The delegates' real focus, on which they spent the vast majority of their time when it came to the presidency, was devising an office that could wield those authorities effectively and responsibly, not defining what those powers were.
For the Founders, the president's list of authorities mattered, but so did the character of the office that would exercise them. There was no better example of this "lesson learned" from the period following 1776 than the contrast between the leadership exhibited by the governor of New York and the lack of leadership from Pennsylvania's executive. Although the two executives had a similar list of authorities, the New York governor was considered to be the strongest of the state executives, while Pennsylvania's was judged to be quite weak, if not the weakest. The key difference was that the former was a single executive, elected for a three-year term, with no limits on reelection and a limited veto power; meanwhile, the latter was a body of twelve, with rotating membership, no prospects for immediate reelection, and no veto power. New York's governor was the state's unquestioned political leader, and it was this institutional model that served largely as the basis for the presidency that emerged from the Convention.
The importance of the institutional aspects of the presidency is highlighted by the most authoritative analysis of the Constitution, the Federalist. In the seventieth essay, Publius argues that "energy in the executive is a leading character in the definition of good government" and that "the ingredients which constitute energy in the executive are, first, unity; secondly, duration; thirdly, an adequate provision for its support; fourthly, competent powers." Note that "powers" is only one of four elements that go into making a capable chief executive; the other three features are aspects of the office.
When attempting to understand presidential behavior, commentators will typically overlook this aspect of the Founders' work.[9] As the Federalist makes clear, the goal was to create an institution that would have not only powers, but also the independence and interest in using those powers in a forceful and purposeful way. The Constitution's drafters were not interested in an executive who would sit on his hands; they wanted a president who could check the legislature if need be, give direction to the policymaking process as required, and, most critically, meet unforeseen contingencies and threats to the country's security. While the Constitution does not guarantee that a president will always take the initiative in these areas, it does virtually everything it can to allow, even encourage, a president to do so.
Hence, however much one may agree or disagree with the particular decisions made by Bush in the wake of the attacks of 9/11, his overall behavior is consonant with what the Constitution's framers would have expected from a president facing such a threat. And while Bush began his term with the expectation that America was in a period of "strategic pause" and his own policies would be of a more "humble" sort, as Tocqueville noted, the pull of circumstances necessitated a much different agenda and correspondingly different vision of the powers he needed to exercise.
An Imperial President?
Critics of the Bush presidency tell a different story, of course. They see an administration that has ignored standing law; violated treaty obligations; undermined the most basic of civil liberties; and, on the whole, used its powers to avoid congressional, judicial, and public oversight. And perhaps worst of all, it used its administrative sway over the intelligence community to foster a case for going to war in Iraq that was built on a tissue of lies.
Although the last charge may be the most serious, it is also the one easiest to rebut. The fact is, the Bush administration's case against Saddam--be it the failure to abide by standing United Nations resolutions, continuing to work on banned weapons of mass destruction (WMD), or consorting with known terrorists and terrorist organizations--is virtually identical to the case President Bill Clinton's national security team was publicly and aggressively making in 1998 when it believed it might have to go to war with Iraq. Moreover, whatever the debates within the intelligence community over this or that aspect of Iraq's WMD program, the overwhelming consensus within the community (as well as the intelligence agencies of our major allies) was that he was still engaged in trying to develop those capabilities. And although it turned out to be an inaccurate picture, the main reason for the flawed estimates was Saddam's own efforts to hide the fact that he no longer had such capabilities, believing the deception would serve to deter either Iran or the West from taking military action against him.[10]
As for ignoring the law, the most commonly cited example is the White House's decision to engage in warrantless electronic surveillance, ignoring in the process the requirements of the Foreign Intelligence Surveillance Act (FISA) to get a court order before targeting any U.S. citizen for surveillance. Indeed, the White House did ignore that law, believing that, in both intent and design, FISA had "grown dramatically out of date."[11] As more than one commentator has correctly noted, FISA was a law intended to help convict known spies, not prevent terrorist acts.[12]
But the president's case for ignoring the specific law was not built on expediency alone. First, it was by no means a stretch for the administration to argue that when Congress passed the Authorization for Use of Military Force (AUMF) resolution in the aftermath of 9/11, it was providing the president with general war-making authority against al Qaeda and its supporters and that, as such, it also carried with it adjunct powers necessary to fulfill the president's obligation "to deter and prevent acts of international terrorism against the United States" as stated in the resolution's preamble. In Hamdi v. Rumsfeld (2004), for example, the Supreme Court read the AUMF as authorizing not only the use of force, but also powers traditionally thought to be incidental to that broader mandate, of which the collection of intelligence has traditionally been one.
Moreover, as the FISA court itself recognized in a 2002 review, all previous federal appellate courts have "held that the President did have an inherent authority to conduct warrantless searches to obtain foreign intelligence information. . . . We take for granted the President does have that authority and, assuming that is so, FISA could not encroach on the President's constitutional power."[13] The Court's view was once Congress's view as well. When Congress passed the Omnibus Crime Control and Safe Streets Act in 1968, which detailed among other things the procedures federal law-enforcement authorities had to go through to get a wiretap, the law explicitly stated that nothing in this act "shall limit the constitutional power of the President to take such measures as he deems necessary to protect the Nation against actual or potential attack."
According to the president's critics, however, Bush upped the imperial ante not only by bypassing standing statutes, but also by violating treaty obligations and in so doing suspending perhaps the most fundamental of constitutional rights, that of the writ of habeas corpus.[14]
In the first instance, the administration is accused of failing to abide by the Third Geneva Convention in its handling of captured al Qaeda and Taliban fighters. That accord, signed by the United States in 1949 and subsequently ratified by the U.S. Senate, sets out in fairly extensive detail how prisoners of war are to be treated. Along with the Fourth Geneva Convention, which governs the treatment of civilians during war, and an addendum to the Third Convention, known as Protocol I, which extends prisoner-of-war status to captured guerrilla fighters, the Geneva Accords seem to leave little room for denying fighters and terrorists captured on the battlefield of Afghanistan or inside Pakistan the rights set out under the conventions.
President Ronald Reagan in 1987, however, explicitly rejected submitting Protocol I to the Senate for ratification. In doing so, the administration maintained an older view that there was still a useful distinction to be maintained between soldiers who fought in uniform under a legitimate sovereign authority and those combatants, such as terrorists, who did not. Here the Reagan administration was in step with a prior Supreme Court decision that "unlawful combatants" were "deemed not to be entitled to the status of prisoners of war" and as "offenders against the law of war subject to trial and punishment by military tribunals."[15]
Nor was the Bush administration flying against precedent and case law when it set up the detention facilities at Guantanamo Bay and created a virtual state of limbo for the detainees. Over the years, in both Republican and Democratic administrations, the base at Guantanamo had been used to house for indefinite periods Haitian and Cuban refugees whom the government intended neither to put on trial nor to allow into the United States. Both the Clinton administration and the earlier Bush administration had used Guantanamo precisely because they believed doing so put the detainees outside the reach of the U.S. courts and, hence, precluded challenges to those detentions by writs of habeas corpus.
Here again, the Bush administration had court precedent on its side. In 1950, the Supreme Court explicitly rejected a habeas appeal by foreign combatants held by American forces outside U.S. sovereign territory. In the majority opinion, Justice Robert Jackson dismissed the appeal (made by German prisoners convicted of war crimes by an American military commission operating abroad) by pointedly noting that there was "no instance where a Court, in this or any other country where the writ is known, has issued [a habeas corpus writ] on behalf of an alien enemy who, at no relevant time and in no stage of his captivity, has been within its territorial jurisdiction. Nothing in the text of the Constitution extends such a right, nor does anything in our statutes."[16]
Even Bush's use of "signing statements"--written pronouncements issued by a president upon signing a bill into law--to announce his intention not to be bound by some feature of the law on constitutional grounds is not an administration invention. Signing statements have been around since the time of James Monroe, although they have come into much wider use during the past four presidencies. It was the Clinton Justice Department that wrote: "If the President may properly decline to enforce a law, at least when it unconstitutionally encroaches on his powers, then it arguably follows that he may properly announce to Congress and the public that he will not enforce a provision of an enactment he is signing."[17]
And while civil libertarians decry (sometimes justifiably) how the FBI and other federal law enforcement agencies are employing the new investigative powers granted them by the Patriot Act and other laws, those measures were duly enacted by Congress, and they remain on the books. For all the worries about abuse of power, nothing this administration has done compares with previous executive branch responses to perceived threats on the home front--be it Abraham Lincoln's suspension of habeas corpus during the Civil War, Franklin Roosevelt's mass internment of Japanese immigrants and U.S. citizens of Japanese descent at the start of World War II, or the so-called Palmer Raids in which Woodrow Wilson's administration rounded up thousands of suspected anarchists and leftists after World War I.
As these last examples suggest, the assertion of presidential power may not always be wise and may well be constitutionally debatable. But it is a virtual certainty that, in times of war or when American security is perceived to be directly threatened, presidents will ignore laws they see as violating their larger constitutional duties to wage war successfully and to protect the homeland. And, in turn, they will use gray or uncharted areas in the law to push their executive powers forward. Given how little prepared the United States was in its laws, institutions, and tools to deal with the global jihadist threat following 9/11, it should come as no surprise that Bush moved as aggressively as he did. Undoubtedly, neither the Constitution's framers nor Tocqueville would have been surprised.
Principle and Prudence: Lessons from the Past
Bush's presidency has not been the imperial presidency its critics claim. Indeed, on the most important issue--going to war, both in Afghanistan and Iraq--he has sought and received congressional approval. Lest we forget, there have been any number of presidents (Harry Truman in Korea and Clinton in Kosovo, to take but two obvious examples) who have gone to war without the sanction of Congress or the traditional justification of using the military to protect American lives or property. And, again, while the president's harshest opponents do not accept the legal and constitutional grounds the administration has put forward to justify his decisions, any objective assessment would conclude that Bush has acted within the general bounds of what the laws and precedents permit.
If there is any kernel of truth to the charge that Bush's presidency has been imperial, it rests with the fact that the Constitution invites presidents, through its broad delegation of "the executive power" and the office's unique institutional features, to be assertive in times of crisis. But if so, then what does it mean to call something "imperial" when it is fueled by the fundamental law of the land itself?
Take, for example, how George Washington handled the first major foreign policy crisis that arose under the new constitution. Fearing that a general war in Europe in 1793 between revolutionary France and the surrounding monarchic powers would inadvertently drag the young (and unprepared) United States into the conflict, Washington issued on his own authority the Neutrality Proclamation. He then followed up the proclamation with decisions and a set of administratively instituted rules intended to enforce it. In doing so, Washington was asserting a prerogative to interpret treaty obligations that involved matters of war and peace and to constrict the behavior and commercial activities of private citizens, all done without the color of any guiding law or consultations with Congress.[18]
Then, of course, there is the example of Lincoln during the country's greatest crisis, the American Civil War. Lincoln, on his own authority and guided by his presidential oath "to preserve, protect and defend the Constitution," spent monies never appropriated by Congress, suspended the writ of habeas corpus, created military commissions to try civilians, issued the Emancipation Proclamation, and openly defied the Supreme Court by refusing to abide by Chief Justice Roger Taney's order to uphold the writ in the case of Marylander and Southern sympathizer John Merryman.[19]
It is also useful to note, however, that in time both Washington and Lincoln turned to Congress for its stamp of approval for the actions they had taken in the immediate stages of the crisis each faced. In Washington's case, he waited until Congress was scheduled to come back into session (December 1793)--more than six months after the proclamation's issuance and time enough for his decisions to take hold and allow for the American public's desire to come to France's assistance to cool down. In contrast, Lincoln called Congress back into special session for July 4, 1861, five months before the regularly scheduled session. Yet he did so in mid-April, allowing more than two and a half months for the members to make their way to Washington. "At a time when telegraphs and railroads were common, Congress surely could have been convened sooner. Implicitly obliged to convene Congress to deal with the national crisis, Lincoln used both the independence of his office and the discretion built into the convening authority to give himself a bit more leeway--and thus a bit more power to control the national government's response to the crisis--than would have been the case if Congress had been convened at the earliest possible moment."[20]
With these two examples, we see presidents exercising their authorities to the fullest but in a manner that also seems prudent constitutionally. Yes, the Founders had created a presidency designed to take the lead in such instances and empowered the president with both the authorities and autonomy to do so, but they also set in place a potential check on the president's decisions by creating a system in which Congress or the Supreme Court would play "Monday morning quarterback"--exposing malfeasance when called for; adding or cutting off funds when necessary; passing laws to regularize the exercise of executive discretion without undermining it; and, in the face of truly egregious behavior, being ready to impeach a president. It is a wise president who understands that reality as well.
At some point, it is inevitable that Congress or the Supreme Court will want its say in these critical matters. Reading the Constitution as though it is simply a legal document in which the powers of each branch are distributed in a 100 percent coherent, black-and-white manner, as many a presidential lawyer is wont to argue, is a recipe for problems down the road. The fact is, the broad discretion a president wields in many of these instances will bleed into the rights and authorities of the other branches and, hence, will be seen by them as something to be contested. Moreover, as history shows, having Congress confirm what a president does in times of an emergency is not particularly difficult; nor is it a zero-sum game in which one is simply conceding to Congress the right to define a presidential prerogative. What we remember and cite in the cases of Washington and Lincoln is, after all, not what measures Congress eventually passed, but what actions the two presidents took.
Since the 1970s there has been a good deal of confusion about the appropriate division of powers under the Constitution when it comes to foreign and defense affairs.[21] The Bush White House has been right in principle to push back against the idea that Congress is its equal in this policy area or that it has the power to define exactly how a president may exercise his authorities. But could it have been more adroit in how it went about making that case? Almost certainly.[22] The cost has been new laws and new Supreme Court decisions that, instead of upholding precedents and authorities friendly to the exercise of presidential power, have begun to curtail them. Whether the issue is one of electronic surveillance, military commissions, due process rights for detainees, or even interrogation techniques, what the president once might have gotten without much compromise has become a partisan issue in which the president is given less discretion. Again, it is understandable why the administration took the positions it did. What is less certain is whether it was prudent to do so.
Gary J. Schmitt is a resident scholar and the director of the Program on Advanced Strategic Studies at AEI.
Full article w/references: http://aei.org/publications/pubID.29190,filter.all/pub_detail.asp
Conservative Views On Reducing Poverty by Revitalizing Marriage in Low-Income Communities
Heritage, January 13, 2009
Special Report #45
[C]hildren living with single mothers are five times more likely to be poor than children in two-parent households. Children in single-parent homes are also more likely to drop out of school and become teen parents, even when income is factored out. And the evidence suggests that on average, children who live with their biological mother and father do better than those who live in stepfamilies or with cohabiting partners.... In light of these facts, policies that strengthen marriage for those who choose it and that discourage unintended births outside of marriage are sensible goals to pursue.
President-elect Obama, the collapse of marriage is the most important social problem facing the nation. When the War on Poverty began in the 1960s, 7 percent of U.S. children were born outside of marriage. Today, the number is 38 percent. Among blacks, it is 69 percent. You are in a unique position to reverse this alarming trend.
The decline of marriage is a major cause of child poverty. Roughly two-thirds of poor children live in single-parent homes. Marital collapse is also a major contributor to welfare dependence: Each year, government spends over $250 billion for means-tested welfare benefits for single parents.
When compared to similar children raised by two married biological parents, children raised in single-parent homes are more likely to fail in school, abuse drugs or alcohol, commit crimes, become pregnant as teens, and suffer from emotional and behavioral problems. Such children are also more likely to end up on welfare or in jails when they become adults.
Revitalized marriage can have a powerful impact in reducing poverty in low-income communities. For example, if poor women who have children out of wedlock were married to the actual fathers of their children, nearly two-thirds would be lifted out of poverty immediately.[2] Because the decline in marriage is linked to many other social problems, an increase in healthy marriage would lead to a long-term drop in those problems as well.
Given these facts, policies that strengthen marriage for those who are interested and discourage births outside of marriage are indeed sensible. But the first step in developing such policies must be to look beyond the many misperceptions that cloud the issue. Effective policy must be based on facts.
Fact: Out-of-wedlock childbearing is not the same problem as teen pregnancy. Although 38 percent of children are born outside of marriage, only about one in seven of these non-marital births occurs to a girl under age 18. Most out-of-wedlock births occur to men and women in their early twenties. Half of the women who have children out of wedlock are cohabiting with the father at the time of birth; 75 percent are in a romantic relationship with the father.[3] Policymakers seeking to reduce out-of-wedlock births must look far beyond teen pregnancy.
Fact: Few out-of-wedlock births are accidental. The overwhelming majority of young adult women who have a non-marital birth strongly want to have children. Although they are ambivalent about the best timing, they want and expect to have children at a fairly young age. Most are also interested in marriage, but they do not see marriage or a stable relationship as an important precondition to having a baby. To a significant degree, the decision to have a child outside of marriage is a deliberate choice for these women.
Fact: Lack of access to birth control is not a significant factor contributing to "unintended pregnancy" or non-marital births. A recent survey of low-income women who had had a non-marital pregnancy found that only 1 percent reported that lack of access to birth control played a role in the pregnancy.[4]
Fact: Out-of-wedlock childbearing is concentrated among low-income, less educated men and women. In general, the women most likely to have a child without being married are those who have the least ability to support a family by themselves.
Fact: Although the decline in marriage is most prominent among blacks, it is also a serious problem among Hispanics and lower-income whites: 44 percent of Hispanic children and 25 percent of white children are born outside of marriage.
Fact: Low male wages and employment are not the principal cause of out-of-wedlock childbearing. The overwhelming majority of non-married fathers were employed at the time of the child's birth. Over half earn enough to support a family above the poverty level without the mother working at all.[5] Before the child's birth, the fathers-to-be, on average, earned more than the mothers-to-be. If, as some argue, the fathers were not economically prepared to support a family, the mothers were even less prepared. Other factors such as social norms concerning marriage, life-planning skills, and relationship skills play a far greater role than male wages in promoting out-of-wedlock childbearing.
Fact: Out-of-wedlock childbearing is not the result of a shortage of marriageable males. Nearly 40 percent of all American children, and 69 percent of black children, are born outside of marriage. The sheer magnitude of the problem undercuts the argument that it is caused by a shortage of marriageable men. The decline in marriage in low-income communities stems from changing social norms and from a welfare system that for decades has penalized marriage, not from a lack of millions of marriageable men.
Government should help low-income couples to move toward more prosperous lives by providing such men and women with education that increases their understanding of the strong link between marriage and better life outcomes and that equips them to make critical life decisions concerning childbearing and family formation more wisely.
Paradoxically, most low-income men and women who are likely to have children out of wedlock have favorable attitudes toward marriage: If anything, they tend to over-idealize it. However, many low-income couples do not believe that it is important to form a stable marital relationship before conceiving children and bringing them into the world. They also tend to believe that haphazard cohabiting relationships are likely to endure and flourish when, in reality, this seldom occurs.
Many low-income individuals choose to have children first and then work on finding suitable partners and building strong relationships. They fail to understand that this pattern is not likely to be successful. Most low-income young women, in particular, strongly want children and hope those children will grow up to enter the middle class, but they fail to appreciate the vitally important role a healthy marriage can play in boosting a child's success.
In The Audacity of Hope, you wrote:
[R]esearch shows that marriage education workshops can make a real difference in helping married couples stay together and in encouraging unmarried couples who are living together to form a more lasting bond. Expanding access to such services to low-income couples, perhaps in concert with job training and placement, medical coverage, and other services already available, should be something everybody can agree on.[6]
You were exactly right. By and large, young low-income men and women aspire to have strong, healthy marriages. They also seek upward social and economic mobility. Marriage education can help at-risk individuals appreciate the role that healthy marriage can have in meeting long-term life goals and can enable them to make decisions about childbearing that best match their life aspirations. These programs can also provide training in life partner selection and in skills that help to build healthy enduring relationships. Such programs should not be regarded as imposing alien middle-class values on the poor, but rather as providing vital tools to help individuals fulfill their real life goals.
You have also written, "most people agree that neither federal welfare programs nor the tax code should penalize married couples."[7] Again, you are right. Given the private and social benefits of marriage, it is absurd for the welfare industry to penalize marriage. Yet that is exactly what welfare does.
Specifically, welfare programs create disincentives to marriage because benefits are reduced as a family's income rises. A mother will receive far more from welfare if she is single than if she has an employed husband in the home. For many low-income couples, marriage means a reduction in government assistance and an overall decline in the couple's joint income. Marriage penalties occur in many means-tested programs such as food stamps, public housing, Medicaid, day care, and Temporary Assistance for Needy Families. The welfare system should be overhauled to reduce such counterproductive incentives.
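The mechanism can be made concrete with a stylized benefit phase-out sketch; every dollar amount and the phase-out rate below are hypothetical, chosen only to show how adding a husband's earnings to countable income shrinks the benefit.

    # Stylized welfare marriage penalty. A means-tested benefit phases out
    # as countable household income rises; marriage adds the husband's
    # earnings to that income. All figures are hypothetical.
    def benefit(countable_income, base=12000.0, phase_out_rate=0.5):
        """Annual benefit: base grant reduced 50 cents per dollar of income."""
        return max(0.0, base - phase_out_rate * countable_income)

    mother_earnings = 8000.0      # assumed
    father_earnings = 22000.0     # assumed

    single  = benefit(mother_earnings)                     # $8,000 benefit
    married = benefit(mother_earnings + father_earnings)   # benefit fully phased out

    print(f"benefit if single:  ${single:,.0f}")
    print(f"benefit if married: ${married:,.0f}")
    print(f"marriage penalty:   ${single - married:,.0f}")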
Now is the time for action. You and your Administration, by launching the following specific initiatives, can help to revitalize marriage in America.
- Recognize that the key to arresting the decline of marriage in the U.S. is moral leadership. Use the White House bully pulpit to reaffirm the value and importance of marriage. You are uniquely suited to this task. Your strong personal affirmation of values will prove critical in transforming anti-marriage norms and in promoting a long-overdue renewal of marriage in low-income communities.
- Use the bully pulpit to emphasize the historical importance of marriage within the black community. Remind the nation that even at the height of Jim Crow segregation prior to World War II, nine out of ten black children were born to married couples. Warn the nation that the same decline in marriage that afflicted black communities a generation ago is now battering low- and moderate-income white communities.
- Encourage public advertising campaigns on the importance of marriage that are targeted to low-income communities.
- Provide marriage education programs in high schools with a high proportion of at-risk youth. Most low-income girls strongly desire to have children. They also wish and intend to be good mothers. These young women will be very receptive to information that shows the positive effects of marriage on long-term child outcomes. Such education could be funded under the current "healthy marriage initiative" program at the U.S. Department of Health and Human Services (HHS).
- Make voluntary marriage education widely available to interested couples in low-income communities. This could be done by expanding the small "healthy marriage initiative" currently operating in HHS. These programs may also provide job training to participants, but that should not be their primary emphasis.
- Provide marriage education referrals in Title X birth control clinics. Government-funded Title X clinics operate in nearly every county in the U.S., providing free or subsidized birth control to over 4 million low-income adult women each year. Many clients of these clinics go on to have children out of wedlock within a short period. With 38 percent of children born outside of marriage, it is obvious that a policy of merely promoting birth control is highly ineffective in stemming the rise of non-marital births. In addition to providing birth control, Title X clinics should be required to offer referrals to education in relationships, marriage, and life-planning skills to clients who are interested.
- Reduce the anti-marriage penalties in welfare. The simplest way to accomplish this would be to increase the value of the earned income tax credit (EITC) for married couples with children; this could offset the anti-marriage penalties existing in other programs such as food stamps, public housing, and Medicaid.
Conclusion
More than 40 years ago, Daniel Patrick Moynihan, then a member of the White House staff under President Lyndon Johnson, warned of the impending collapse of the black married family. He predicted the social calamities that this collapse would bring. Moynihan was right, but in subsequent decades, as the problem mushroomed, the nation largely hid its head in the sand and ignored the devastation. In the four decades since Moynihan's warning, the government has done almost nothing to protect or restore marriage.
Today, the collapse of marriage about which Moynihan warned so long ago is escalating rapidly across other racial groups. Forty years of neglect and silence is enough. You now have a unique opportunity and ability to halt this destructive trend and to take the first decisive steps to restore marriage in our society.
Robert Rector is Senior Research Fellow in the Domestic Policy Studies Department at The Heritage Foundation.
Full article w/references at the original link.