Sunday, November 28, 2010

Real vs Virtual: The Coming Great Divide?

Humans divide along many lines—gender, politics, race, religion, football teams. Another division is, I think, becoming increasingly important, one defined by what things matter: Real vs Virtual.


From one side, what are important are real world accomplishments—planting a tree, bringing up children, doing a useful job, writing a book. Games, online or elsewhere, can be a pleasant form of entertainment, but accomplishments in them don’t count towards whether you feel that you are, in a metaphorical sense, paying for the space you occupy, the air you breathe, whether you will be entitled to die with a sense of accomplishment, a life well lived.


Seen from the other side, real world activities—earning enough to pay for food, housing, an internet connection and a WoW subscription—are merely necessary inconveniences, absorbing time that might be better spent getting your characters to level 80, killing the Lich King, growing your guild.


I put the distinction as real vs virtual because that is a particularly striking version, but the issue is both broader and older than online gaming and first came to my attention in a very different context. I am a long time participant in the SCA, an organization that does historical recreation. Some of my fellow participants are able and energetic people who earn their living at one or another not particularly interesting or demanding job while putting their real abilities, energy, passion into their hobby. Other examples of the same pattern can be found in the worlds of bridge playing, science fiction fandom, “horse people,” and many others. Accomplishments exist in, for the most part only in, the context of the particular game, subculture, activity. Training a horse is a real world activity. But in a world where horses no longer function for transport or pulling plows, it is, in an important sense, no more real than learning to be very good at killing enemy players in World of Warcraft. The point is encapsulated in the story of the man who explained that he played golf to stay fit. Fit for what? Golf.


I have made the distinction sharper than it really is. When my daughter translated a 15th century Italian cookbook, she was contributing to the SCA game. But she was also adding one more crumb of knowledge to historical scholarship and, in the process, fulfilling a requirement for her college, which had a one-month winter term that students were supposed to spend doing approved projects. Even in the case of purely virtual activities, one player’s activity in WoW, building a guild or leading a raid, contributes to the entertainment of other players. Arguably that is a real accomplishment in the same sense that writing and publishing a novel is.


Nonetheless, I think the division is real, important, and based on a disagreement about values, about what matters. It is in that sense a religious division. And it is one that may become increasingly important as improvements in the relevant technologies make possible and attractive something close to a fully virtual life, the experience machine that Robert Nozick described in his Anarchy, State, and Utopia. If some people are living most of their life online, getting most of their feeling of worth and accomplishment from virtual achievements, while others continue to base theirs on things done in realspace, how will the two sorts regard each other?

Wednesday, November 24, 2010

TSA: The Problem of Trust

The Transportation Security Administration, the President, the Secretary of State, and very nearly everyone else agree that the full body searches and alternative pat downs the TSA has started to implement are intrusive. The TSA, however, insists that they are a necessary precaution to prevent future aircraft bombings.

This would be a persuasive argument if the rest of us had any reason to take claims by the TSA seriously, but we don't. Whether or not this particular requirement makes sense—I have seen arguments by people better qualified to judge than I am who think it does not—enough past requirements were clearly security theater rather than security to destroy any claim the organization might have had to be trusted.

To take the earliest and most striking example, the TSA used to, and for all I know still does, interpret the rule against knives to cover the inch-long nail files sometimes built into nail clippers, with the result that anyone who happened to have a nail clipper with him and did not want to trash it was required to let them break off the file. To take a long continued example, the TSA insists that its agents be able to search our luggage but has failed to take the most elementary precaution to keep them from pilfering valuables: including, in the note enclosed in searched luggage, a number identifying the agent who searched it. In these ways and others, the organization has demonstrated that its concern, insofar as an organization can be said to have concerns, is with something other than the welfare of the people it claims to protect.

And, for the latest example, the TSA initially insisted that the new search requirements applied to pilots as well as passengers. Only after someone pointed out to them that a pilot who wanted to crash the plane he was flying didn't need explosives to do it—and, more important, after it became clear that enough pilots were unwilling to go along with the requirement to provide, at the least, a very serious public relations problem—did they reverse that part of their policy. The implication is either an organizational IQ at the idiot level or, more plausibly, an organization more concerned with image than substance.

Trust, once lost, is hard to get back.

Wednesday, November 17, 2010

Translucent Backs for Laptops?

I'm writing this post on an 11" MacBook Air, in a car on the highway; my wife is driving and I am connected to the internet via the moving hotspot generated by my Android phone. It's bright out and I was having some trouble reading the screen—with the exception of a patch in the middle, which had a brighter background and so provided more contrast, a patch produced by the translucent white Apple logo in the lid.

Which suggests an interesting possibility. Why not make the whole lid out of a translucent material, thus providing a brighter background at no cost in power? It would only work in a bright environment—but then, that's when it is most needed.

Eventually it occurred to me to check the setting for screen brightness. With that turned up, the rest of the screen is readable too. But the translucent back still seems as though it might make sense.

Perhaps under some circumstances it would be a disadvantage, although I'm not sure what those would be. If so, use the high tech version: two layers of polarizing material, one of which can have its polarization switched, vertical to horizontal or left handed to right, to make the combination opaque.

Monday, November 15, 2010

Tax Cuts: A Prediction

Just for fun, I thought I would establish my prediction of the outcome of the current disagreement over tax cuts, to demonstrate how much I do (or don't) know about political strategy.

The agreement will be a temporary continuation, probably for two years, of the Bush tax cuts for everyone.

Why? Because the alternative compromise is a permanent extension for the lower income group, temporary for the higher. Do it that way and, when the temporary extension expires, the issue can be presented as whether to give rich people a tax cut--not whether to give everyone one. That's a strong political position for the Democrats, hence one the Republicans will be reluctant to set up.

Sunday, November 14, 2010

Sustainability: Part II

A commenter on my previous post informs me that:
The generally accepted definition comes from the Brundtland Report, which defines sustainable development as: "development that meets the needs of the present without compromising the ability of future generations to meet their own needs".
There are two problems with this definition. The first is that implementing it requires us to predict what the future will be like in order to know what the needs of future generations will be. Consider two examples:

1. The cost of solar power has been falling steeply. If that fall continues, in another couple of decades fossil fuels will no longer be needed for most of their current purposes, since solar will be a less expensive alternative. If so, sustainability does not require us to conserve fossil fuels.

2. A central worry of environmentalists for at least the past sixty years or so has been population increase. If that is going to be the chief threat to the needs of future generations then sustainability requires us to keep population growth down, as many have argued.

A current worry in developed countries is population collapse, birth rates in many of them being now well below replacement. With the economic development of large parts of the third world, that problem might well spread to them. If so, sustainability requires us to keep population growth up, to protect future generations from the dangers of population collapse and the associated aging of their populations.

It's easy enough to think of other examples. Generalizing the point, "sustainability" becomes an argument against whatever policies one disapproves of, in favor of whatever policies one approves of, and adds nothing beyond a rhetorical club with which partisans can beat on those who disagree with them.

There is a second and related problem with the definition: whether it is to be defined by individual effects or net effects. If a particular policy makes potable water less available to future generations, with the result that many of them get drinking water in bottles rather than from the tap, but also makes future generations enough richer to more than pay the cost of that bottled water, is that policy consistent with sustainability?

Or consider the issue of global warming. Assume that it can be slowed or prevented, but at the cost of slowing the development of much of the world. To make the point more precise, suppose that global warming imposes an average cost on future generations of 10 utiles (or whatever unit you prefer to use to measure the ability of future generations to meet their own needs), but the policies that prevent it impose a cost of 20. Is permitting global warming sustainable? Is preventing it?

If we define sustainability in terms of individual effects, treating as unsustainable anything which makes future generations less able to meet any one of their needs, there may be no policies at all that are sustainable, since each alternative alters the future in different ways and any alteration is likely to be bad in at least one respect. If, more plausibly, we define it in terms of net effects, then the demand for sustainability turns into the demand that we not follow policies that make future generations worse off than the present generation.

What policies make future generations better or worse off is one of the things people who argue about policy disagree about. It was obvious to a large number of intelligent and thoughtful people early in the past century that socialism made people better off; it is obvious to most such people now that it had the opposite effect. Similarly with current arguments over almost anything, from gay marriage to genetically engineered crops. "Sustainability" becomes an argument for both sides, each interpreting it by its view of the consequences of the policies it supports or opposes.

Not only does the requirement of sustainability add nothing useful to the conversation, it takes something away. It implies that the one essential requirement is making sure our descendants are as well off as we are; whether they end up better off than we are, as we are better off than our ancestors, is relatively unimportant. That surely impoverishes any serious discussion of policies that affect future generations.

I am grateful to the commenter for providing me with a definition, but it does not alter my conclusion. To regard sustainability as a useful and important goal is indefensible.

Saturday, November 13, 2010

Sustainability

My university is very big on sustainability. A quick search of its web site failed to produce any clear definition of the term, but I think a reasonable interpretation, based on the word itself and how I see it being used, is that it means doing things in such a way that you could continue doing them in that way forever. If so, the idea that sustainability is an essential, even an important, goal strikes me as indefensible.

To see why, imagine what it would have meant c. 1900. The university existed; it had a lot of students and faculty. None of them had automobiles. Many, presumably, had horses. Sustainability would have included assuring a sufficient supply of pasture land for all those horses into the indefinite future. It might have included assuring a sufficient supply of firewood. It would, in other words, have meant making preparations for a future that was not going to happen.

The same is true today. Making sure we can continue our present activities into the indefinite future makes sense only if we believe that we will be doing those things into the indefinite future. Judged by what we have seen in the past and can guess about the future, that is very unlikely. We do not know what the world of forty or fifty years hence will be like, but it will not be the same as the present world, hence it is very unlikely that we will be doing the same things in the same way and requiring the same resources to do them with.

The issue was recently brought to my attention when a colleague at a faculty meeting gave a glowing description of all the good things that were being done or planned in support of sustainability, up to and including a future teach-in. I asked him one question—whether any part of the plans included presentations of arguments against sustainability. His answer was that any arguments against sustainability would be presented by speakers who were in favor of it.

That is not how universities are supposed to function.

Wednesday, November 10, 2010

Carrying a Laptop: A Social Puzzle

I am writing this post on my newest acquisition, an 11” MacBook Air. It’s a lovely piece of machinery, very small and light and surprisingly powerful—it can, for instance, run World of Warcraft at respectable frame rates without my having to turn the graphics settings down low.


It is small, but not small enough to fit in any of my pockets. The obvious solution is to buy or make a carrier for it, a cloth or leather pouch big enough for the computer and perhaps its charger, with a strap that leaves my hands free. Which raises some design questions.


Given the size of the computer and the shape of the human body, the most convenient design for such a pouch is one lying over my chest, a reasonably flat area of about the right width, supported by a strap going behind my neck. Baby carriers are often made that way, with some additional straps—most babies are heavier than most laptops. WWI soldiers carried front packs as well as back packs; for all I know soldiers still do. But I do not think I have ever seen a case for a laptop, or papers, or anything similar designed to hang in front of the carrier’s chest. All such seem to be designed either to go on your back or to hang by your side. What I don't know is why.


Pursuing that question further, I try to imagine making and wearing a case for my new toy designed along what seem to me to be sensible lines. My response to the imagined scene is discomfort, embarrassment. I feel as though I would be doing something odd, violating some unspoken norm. People would look at me oddly.


Why I feel that way, and why carry bags are designed as they are, I do not know. One possibility is that it is because the most common such device is a woman’s purse and women’s purses normally hang at their sides, perhaps because women’s breasts would get in the way of a frontal design. But that is only a guess, and one made by a not very observant guesser; perhaps some women’s purses are designed in the way I have just suggested they are not, and I have simply not noticed them.


My best evidence is inside my head, where I seem to have internalized a norm mandating bad design for the case for my new toy.

Sunday, November 07, 2010

Clothing Naked Statues: An Instructional Fable

In a recent online discussion, I came across the following, with reference to John Ashcroft:

"Wasn't he the one who ordered clothing be put over statues of women with naked boobies?"

Curious, I did a quick Google, and located the relevant information on Wikipedia. The fact on which the story is based involves not statues of women with naked boobies but one statue of one woman with one naked boobie, a representation of the Spirit of Justice located in the headquarters of the Department of Justice. It was veiled not with clothing but with curtains that could be used to block the view of the statue during speeches, when it would otherwise have been a feature of the background. The curtains were initially installed not by Ashcroft but by, or at least during the tenure of, his predecessor, although the installation was made permanent under Ashcroft.

Or in other words, most of the content of the story, with its implication of Ashcroftian puritanism, is bogus. Which I take as support for my general rule of thumb: Regard with suspicion any historical anecdote that makes a good enough story to have survived on its literary merit.

And just to balance myths of left and right, I note the recent widely circulated claim that Barack Obama's visit to India is costing $200 million a day. It's a good story, and obviously fits well with the view of Obama as fiscally profligate. But its sole basis seems to be a single news story from India, sourced to an anonymous "top official of the Maharashtra Government privy to the arrangements for the high-profile visit." Which did not prevent it from being widely published as fact in online (and, I presume, print) sources in the U.S.

By now most reputable media that mention it have reported it as bogus—but twenty years from now, if the Internet is still functioning more or less as it now does, the story will be alive and well. For my favorite example of the phenomenon, this time a deliberate prank by one of the 20th century's greatest journalists, google the Bathtub Hoax.

The Most Expensive Research Project Ever

As an observer of, but not a participant in, the macro wars of the past fifty years, I was struck by the way in which 1960s Keynesianism, largely abandoned by academic economists due to its inability to explain real world events, was suddenly revived in response to recent economic difficulties. Not only revived, but presented by its supporters, including President Obama, as what all economists believed in—a claim that provoked a newspaper ad signed by a large number of economists who didn't.

The truth is that, as Gary Becker puts it in a recent blog post, "The unpleasant fact we economists have to face is that there is not strong evidence on the actual effects of governmental spending on employment and GDP. The usual claimed effects are generally based on predictions from highly imperfect theoretical models of the economy rather than from strong direct and clear evidence on the employment consequences of different fiscal stimuli."

Which is one reason why, as Becker goes on to point out, the contrast between the current policies of the U.S. and the U.K. should be of considerable interest to economists. The U.S. policy, confidently predicated on the theory that deficit spending reduces unemployment, has been to greatly expand both spending and deficit relative to their long term levels, with spending up from about 20 percent of GNP to about 25 percent. The U.K., in contrast, is proposing to reduce its deficit, mainly by reductions in spending, by about 1.5 percent of GNP for each of the next four years; if they carry through, they will have cut spending, relative to GNP, by about as much as Obama and Bush have increased it. At the same time, Obama's fiscal policies are combined with a substantial increase in government involvement in economic affairs, most notably in health care, while Cameron is at least proposing a decrease.

There are, of course, lots of other differences between the U.S. and the U.K. But such a drastic difference in policy ought to produce results large enough to outweigh them. If the current position of the U.S. administration and its economists is correct, their policy should decrease unemployment by a substantial amount while the U.K. policy increases it by a comparable amount. If, on the other hand, as I suspect, the U.S. recession has been so severe not despite Obama's policies but because of them, and similarly for the Great Depression and FDR, the prediction reverses. It will be interesting to see.

Economists should be grateful to President Obama and Prime Minister Cameron for arranging something reasonably close to a scientific experiment on the effects of government policy. Other people might look at the matter somewhat differently. Whichever of the two turns out to be wrong will have imposed a very large cost, quite possibly in the trillions of dollars, on the population he is experimenting on.

Which is the justification for the title of this post.

To be fair, there is a competing claimant to the title. One could view the communist states of the 20th century as a similar research project, this time on the effect of central economic planning. Seen from that point of view, measured in human cost and perhaps also measured in dollars, it was a still more expensive experiment.

Three Wrongs Don't Make a Right: Thaler on Estate Taxes

In a recent New York Times piece, Richard Thaler discusses alternative ways in which the estate tax might be revised. I have no strong opinions on optimal taxation, other than wanting it to be as low as possible, but it struck me that one part of his argument was wrong in an interesting way. Thaler writes:
"First, it is incorrect to say the estate tax amounts to double taxation. The wealth in many large estates has never been taxed because it is largely in the form of unrealized — therefore untaxed — capital gains. A 2000 study found that for estates worth more than $10 million, unrealized capital gains represented 56 percent of assets."
The problem with this is that capital gains are calculated on nominal, not real, values. To see why that matters, consider someone who bought an asset in 1981 for $100 and sold it in 1998, the year the study's figures are based on, for $200. On paper, he has a capital gain of $100. But over those seventeen years, prices doubled; $200 in 1998 was worth the same amount as $100 in 1981. His real capital gain is zero. If instead he sold the asset for $300, the capital gain reported on his Schedule D would be $200, his real capital gain only $100.

As you can check by downloading the study Thaler cites (his link only gives you the abstract), its figure of 56 percent of assets was calculated using the conventional definition, hence consists largely of imaginary capital gains. One cannot tell how large the overestimate is without additional information on when and for how much the assets were bought. If we assume that my imaginary asset bought for $100 in 1981 and worth $300 in 1998 is typical, Thaler's figure is off by a factor of two—real capital gains represent 28 percent of those estates, not 56 percent, which makes his dismissal of the double taxation argument substantially less persuasive.
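To make the indexing arithmetic concrete, here is a minimal sketch in Python of the calculation described above. The price level ratio of 2.0 is simply the assumption, stated above, that prices roughly doubled between 1981 and 1998, not an official index figure; the function names are my own, chosen for illustration.

```python
def nominal_gain(basis, sale_price):
    """Capital gain as currently calculated: sale price minus original basis."""
    return sale_price - basis

def real_gain(basis, sale_price, price_level_ratio):
    """Capital gain with the basis indexed for inflation.

    price_level_ratio is the price level at sale divided by the price level
    at purchase -- about 2.0 for 1981 to 1998, by the assumption above.
    """
    return sale_price - basis * price_level_ratio

# The example above: an asset bought for $100 in 1981.
for sale_price in (200, 300):
    print(sale_price,
          nominal_gain(100, sale_price),    # what Schedule D reports
          real_gain(100, sale_price, 2.0))  # the inflation-adjusted gain
# Sold for $200: nominal gain 100, real gain 0.
# Sold for $300: nominal gain 200, real gain 100.
```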

Why does all of this matter? It matters because what Thaler is implicitly arguing for in this part of his piece is balancing one error in the tax code with another, while ignoring a third.

What are the three errors, seen from the standpoint of measuring and taxing the real gains from buying and selling assets?

1. The failure to index capital gains, to measure them in real rather than in nominal terms. At a zero inflation rate this wouldn't matter, but if inflation is substantial it taxes investors on imaginary profits, heavily discouraging any form of investment activity that will eventually show up on a Schedule D.

2. The failure to retain the basis for capital gains when an asset is inherited. Under current law, when my imaginary investor dies in 1998 and his son inherits his $300 asset, the basis for the asset shifts up, so neither the real $100 gain nor the imaginary $100 gain ever pays capital gains tax.

3. The estate tax. Instead of paying capital gains tax on either the real or the imaginary capital gains, the son is taxed on the amount of the estate, some unknown fraction of which consists of actual capital gains. This is double taxation on part of the estate, single taxation on another part, and, given the exemptions in the estate tax law, zero taxation on a third part.

Richard Thaler's piece is offering advice to Congress on how to deal with the changes in the estate tax currently scheduled for the end of this year. I will accordingly end this post with my alternative. Index capital gains. Base capital gains on the original basis for an asset, whether or not it has been inherited in the meantime. Abolish the estate tax.
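To see how that alternative differs from current law, here is a second minimal sketch, again in Python and again using the made-up numbers from the example above. It compares the capital gain the heir would report when selling an inherited asset under three regimes: current law (stepped-up basis), unindexed carry-over basis, and the indexed carry-over basis proposed here. It ignores rates, exemptions, and the estate tax itself, and all names are illustrative.

```python
def gain_current_law(value_at_death, heir_sale_price):
    """Current law: the basis steps up to the value at death, so gains
    accrued during the decedent's lifetime never pay capital gains tax."""
    return heir_sale_price - value_at_death

def gain_carryover(original_basis, heir_sale_price):
    """Carry-over basis without indexing: the heir keeps the decedent's
    basis, so real and purely inflationary gains alike are taxed."""
    return heir_sale_price - original_basis

def gain_proposal(original_basis, heir_sale_price, price_level_ratio):
    """The proposal above: carry over the original basis and index it,
    so only the real gain is taxed, and taxed exactly once."""
    return heir_sale_price - original_basis * price_level_ratio

# The running example: bought for $100 in 1981, worth $300 at the owner's
# death in 1998 (prices having roughly doubled), sold by the heir for $300.
print(gain_current_law(300, 300))    # 0   -- the gain escapes tax entirely
print(gain_carryover(100, 300))      # 200 -- taxes $100 of pure inflation
print(gain_proposal(100, 300, 2.0))  # 100 -- the real gain, once
```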

The result would still transfer money from private individuals to the government, which I regard as a bad thing although I presume Thaler does not. But it would at least do so in a consistent and coherent way.

Three wrongs don't make a right.

Saturday, November 06, 2010

Three Party Politics

Watching the election returns this week, it occurred to me that they were producing a potentially interesting situation—a three party House of Representatives. On paper, the Republicans have a majority. But that majority depends on the support of a substantial number of representatives who got nominated despite opposition from the Republican establishment, mostly with Tea Party support. They, like everybody else who reads the poll results, are aware that, unpopular as the Democratic party is with voters at the moment, the Republican party is only a little less unpopular.

Suppose the Tea Party representatives respond to that situation by forming their own caucus, as they well may, and functioning as an independent body, a virtual third party. In organizing the House, they will presumably sell their support to the orthodox Republicans in exchange for positions for some of their members. But in future legislative struggles, matters may not be that simple.

Considered as a three party game, and ignoring divisions within the Democratic party, the logic is simple. Obama and the Democrats cannot pass legislation. Neither can the Republicans. Neither can the (hypothetical) Tea Party caucus. Neither can the Republicans allied with the Tea Party. But an alliance of Obama with either of the other two provides majorities in both houses; that plus the President's signature is, short of a successful filibuster, sufficient. And it is not entirely clear that the Republicans, absent the support of Tea Party senators, could mount a filibuster.

It is tempting, but boring, to assume that since the Tea Party is in some sense on the opposite side from the Democrats, no alliances are possible. Reason to reject that assumption is provided by recent British political history. The Liberal Democrats, by most views, were positioned to the left of a Labour Party that had become increasingly centrist in the years since Maggie Thatcher's successful Tory rule. But when an election produced a majority for neither of the major parties, it was the Conservatives, not Labour, that they ended up in coalition with. So far, that coalition seems to be working.

I admit, however, that I have had a hard time thinking up plausible issues on which the Tea Party and the Democrats might align against the Republicans. It would be easier if the Tea Party were more consistently libertarian—one could imagine, for instance, an alliance to pass a federal medical marijuana law, supposing that it occurs to Obama that scaling back the War on Drugs is at this point a more popular policy than strengthening it. But that one strikes me as implausible and nothing else at the moment occurs to me.

Suggestions?

If the Web Had Come First

Thinking about the incident described in my previous post, it occurred to me that the web provides a much cleaner mechanism for dealing with issues of copyright and credit than print media, making it interesting to imagine how the relevant norms and laws might have developed if the web had come first and whether they can and should now be revised to fit the new technology. Consider the simple issue of quoting, in print, something someone else has written. Under current law, before doing so you must:

1. Decide whether your use is fair use; if not, you require permission in order not to violate copyright law.

2. If it isn't, or might not be, fair use, you have to figure out who holds the copyright and how to contact him, not always easy, especially for something published some time ago and/or in a foreign country.

3. You then have to get permission from the copyright holder. In many cases, including the Cooks Source flap, the amount you would be willing to pay the copyright holder is small enough so that it probably isn't worth his time and trouble responding to your query and investigating to see just who you are and how you are likely to use his material.

4. Whether or not you conclude it is fair use, in order not to violate the norms against plagiarism you have to identify who you are quoting—sometimes easy but sometimes, especially if you are putting together a work that involves quoting lots of things from a lot of people, a good deal of trouble. It's particularly difficult if you are picking up something relevant to what you are writing not from the original source but from someone else quoting it—possibly with no attribution, possibly with a false attribution.

5. In order not to engage in what I described in my previous post as reverse plagiarism, attributing something to someone that he did not write, you have to trace the quote back to the original source to make sure you have it right. To observe how rarely people do so, try googling on ["David Friedman" "the direct use"]. Then download my Machinery of Freedom, which is what is being quoted, and do a quick search for "the direct use" to find the actual quote.

I have just done the experiment. The first eight hits had the quote wrong. The ninth was my webbed book.

6. In addition to any legal problems associated with quoting things, there is also the moral issue: are you unjustly making use of someone else's work to benefit you but not him?

All of which makes the reuse of other people's writing, a useful and productive activity, difficult and costly.

Now consider the same issue on the web. In my previous post, I provided my readers with the full text of two magazine articles and a Google Docs spreadsheet. Before doing so I obtained no permissions, made no effort to determine who held the copyright or deserved the credit for them, spent no time at all making sure I had the text right. None of that was necessary because, instead of quoting them, I linked to them.

Doing so also resolved any moral reservations I might have had about making use of the authors' work. They put their work up on the web in order that other people could read it. My links funneled readers to them, hence helped them to achieve the very objective for which they had written and webbed the pieces.

If the web had come first, issues of copyright and credit would have applied only to the rare case where someone chose to copy instead of linking. Indeed, the relevant laws and norms might never have developed, since the very fact that what you were reading was a quote rather than a link, written by the quoter rather than the quotee, would be sufficient reason not to trust it.

The only difficulty I can see with applying this approach online today, linking instead of quoting, in order to work around the inconveniences of laws and norms developed in the context of print publication, is that you may want to quote only a part of what someone else has webbed. I am not sufficiently expert in HTML to know whether there is any convenient way of linking to a page in a way that will highlight the passage starting at character 583 of the target document and ending at character 912, in order to signal to the reader that that is the part you are quoting.

If there isn't, there should be.

What Do You Call Reverse Plagiarism?

There has been a considerable online furor recently over a magazine, Cooks Source, that is apparently in the habit of publishing material lifted from the internet, sometimes without credit, sometimes with credit but without permission. The controversy started when Monica Gaudio, the author of an SCA article on medieval apple pie, discovered that an edited version of her article had appeared, with her name but without her permission, in Cooks Source. She complained to the editor and got back an extraordinarily snarky email informing her that everything on the Internet is public domain—which is, of course, not true—and that she ought to be grateful for the editor's effort improving her article.

As the story spread, a considerable number of people spent time and effort going over back issues of Cooks Source to identify the sources of its material; there is now a Google Docs spreadsheet up that provides a list of stolen articles. So far as I can tell, they didn't steal any of my medieval recipes; perhaps I should feel insulted.

The story raises an interesting terminological, and legal, question. Publishing something I wrote over someone else's name is plagiarism. What about publishing something I didn't write over my name? Monica's article was published over her name but had been edited without her permission, so some of what was published was not what she had written.

Putting aside the fact that publishing it without her permission, with or without credit, was a copyright violation, what was the legal status of attributing to her words she had not written?

Monday, November 01, 2010

Shrink=Trainer=Coach

When I was a graduate student at the University of Chicago, a very long time ago, it was common for undergraduate acquaintances to have, and talk about having, a shrink—a psychoanalyst. I never saw much evidence that psychoanalysis was improving their psyches to any significant degree, which led me to suspect that the real function of the shrink was to make the patient feel better, and perhaps feel more important, by paying attention to him or her. A friend who was getting his doctorate in psychology asked one of his professors what the evidence was that psychoanalysis worked, read the articles the professor suggested, and concluded that the evidence was that it didn't; I take that as at least mild support for my interpretation of the role of the shrink.

So far as I can tell by very casual observation, the shrink has pretty much vanished from that particular role, being replaced by the trainer, aka coach, someone hired to provide advice to his client on how to live his life. As best I can tell, being hired for that job does not, in practice, require any evidence of expertise in living one's own life.

Nor does it require the eight years of medical school plus residency that were the entry requirements for becoming a shrink, but not truly essential for the job of making clients feel as though someone is paying attention to them. It's nice to see progress in the world.