Choosing Religion 

I saw Religulous this weekend. It was an interesting film with a couple of good underlying themes: Be wary of those peddling certainty; doubt is more rational and less destructive. Also, humanity's greatest challenge is to learn to stop wishing for death (end times/rapture/judgment day) before we bring it upon ourselves (nuclear destruction, environmental destruction, etc.). Bill Maher was more dismissive than I liked of some of the points brought up by the people he interviewed, though, and strangely inconsistent in how confrontational and aggressive he was with different interviewees. I don't want to make this a movie review, however, so much as a discussion of the train of thought I followed as a result of seeing the movie.

Consider three religions, A, B, and C. You may think of them as Judaism, Christianity, and Islam, or perhaps Catholicism, Mormonism, and Atheism (yes, I'm counting it as a religion), or any three religions in whatever order you choose. The only thing that matters is that each religion, A, B, and C, teaches that their faith is real and true and all others are false and anywhere from misguided to actually evil. More importantly, these religions teach that bad things will happen to members of other faiths as a direct result of being of the "wrong" faith (e.g. missing out on the Rapture, burning in Hell, etc.). If your faith fits that description, feel free to think of A as your faith. For the purpose of discussion, we'll assume that A turns out to be correct; some time in the future, believers in A will be rewarded for their belief and all others will be punished.

Now consider a person born and raised in religion B; let's call him Fred. There are three relevant choices for Fred: remaining in B, joining A, or joining C. Only one of these is the correct choice, and making that choice randomly gives worse than even odds of success (and his odds are far, far worse when we don't limit the available choices to just those three religions but to all those in the real world). But on what (non-random) basis can Fred make that choice? Having been born and raised in B, the easiest and likely most comfortable decision is to remain in B, but that is the wrong choice.

Even if he rejects the religion in which he was raised, as many people do, he still has to choose between A and C. He is again faced with a random choice and poor odds (and again, those odds are far, far worse in the real world). Based only on probabilities and random choice, changing faiths gives no better odds of getting Fred to the correct religion than staying in his current religion. (For N religions where only one is correct, he starts with a 1/N chance of being in the correct religion. If he is already in the correct religion (1/N) then he is guaranteed to be in one of the incorrect religions if he switches. If he is in an incorrect religion ((N-1)/N) then switching gives him a 1/(N-1) chance of switching to the correct religion. This adds up to ((N-1)/N) * (1/(N-1)) + 1/N * 0, which reduces to 1/N.)
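The parenthetical arithmetic can be sanity-checked with a quick simulation; this is only an illustrative sketch (three religions, 100,000 trials, and a fixed seed are arbitrary choices):

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def odds(n_religions, trials=100_000):
    """Estimate Fred's chance of ending up in the correct religion
    if he stays put versus if he switches to a random other faith."""
    stay = switch = 0
    for _ in range(trials):
        correct = 0                          # label the one true religion 0
        start = random.randrange(n_religions)  # religion he was raised in
        if start == correct:
            stay += 1
        # Switching means picking any religion other than his current one.
        others = [r for r in range(n_religions) if r != start]
        if random.choice(others) == correct:
            switch += 1
    return stay / trials, switch / trials

stay_p, switch_p = odds(3)
# Both estimates hover around 1/3: switching doesn't improve the odds.
```

The simulation just restates the closed-form result: ((N-1)/N) * (1/(N-1)) + (1/N) * 0 = 1/N, the same as staying put.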

There seems to be no way to improve Fred's odds of choosing one faith over another. Only the one setting up the scenario (us, in Fred's case, or God in a world in which there is a God, which may or may not correspond to our own) can genuinely know which religion is correct. Some people trust their heart, or a voice in their head, or some other internal manifestation that they believe to be God communicating this truth, but since those internal manifestations don't bring all who follow them to a single religion, they don't seem to be a reliable way to choose the right one. Likewise, people who claim to have received this communication and try to lead others to a particular faith are no more reliable, since they do not all lead to the same faith. You might be able to improve your odds by limiting your choices to the faiths that people guided by an internal manifestation end up in, but that requires assuming that the correct religion is in that group, and there is no particular reason to believe it is.

Miracles are sometimes considered a way to verify that one's religion is correct, but it would require directly experiencing such a miracle, rather than just hearing about it, to give it weight. It would also require directly experiencing it in a group, where everyone in the group shared the same experience. Furthermore, the miracle must be sufficiently miraculous, i.e. extraordinary. In Religulous, one gentleman tells of a miracle he experienced in which he asked for a glass of water and was sarcastically told to hold it out the window and pray for rain. He did so, and it began pouring rain. He judged it a miracle, whereas Bill Maher considered it a fascinating coincidence but questioned whether it was really a miracle. A rainstorm is just too common to be a miracle, and a downpour on demand is only miraculous, rather than coincidental, if it is dependably repeatable.

Even what seem like genuine miracles happen with some regularity at the hands of man without providing evidence for one religion over another. People are brought back to life after being dead, for example, in hospitals around the world. People fly through the sky (airplanes). People instantly see and hear things that are occurring far away from them (television, radio, telephones, etc.).

Ultimately, whatever one believes, one must live with the fact that one might be wrong and is, in fact, very likely to be wrong. Certitude is just self-delusion. Without certainty in a religion, one must deliberately choose a set of beliefs, goals, and behaviors based on the same basic things that lead to religion: fear of unpleasant consequences and desire for pleasant consequences. Given that the odds are strongly against choosing the set of beliefs, goals, and actions that will lead to pleasant consequences (that one will actually experience) beyond one's lifespan and beyond the world one experiences directly, one can only reasonably choose to avoid unpleasant consequences and seek pleasant consequences in the foreseeable future. Religious certitude can only distract from that.


Entertaining the American consumer 

This will be a bit different from other posts in this blog in that it isn't about teasing apart some issue that has become buried under rhetoric from two or more sides; it is, in fact, an indulgence in punditry. Please bear with me.

The issue here is why, as discussed in this NYTimes article, overall movie attendance has been declining. The article suggests a few possible causes, including lack of high quality movies (very subjective), gas prices, alternate entertainment options (e.g. video games), and the declining quality of the experience (e.g. commercials). I'd suggest that, while these individual factors are part of it, what we are seeing is two fundamental but gradual shifts in consumer behavior.

The first shift is an increased expectation of convenience in one's entertainment. This comes from services like TiVo and NetFlix that make "There's nothing worth watching on TV right now" a thing of the past. There is always something that is not only worth watching but, with TiVo or NetFlix, is also something that we decided beforehand we wanted to watch. Contrast the convenience and comfort of one's own den and the (relative) certainty that we will be entertained with the moviegoing experience of finding parking, lines, commercials, and rude theater patrons (with their loudly whispered conversations, cellphones, seat-kicking ways, and crying/screaming children). The benefits of a theater at its best include a large, high-resolution (yes, even film has a resolution) picture, high fidelity sound, seeing a feature sooner rather than later, and a shared experience (particularly relevant for comedy and horror/thriller). Of course, for many people the home theater at its best offers picture and sound quality that is plenty sufficient.

The second shift is the invisible elephant in the previous paragraph: economic factors. We'll look at several example entertainment-seeking couples and compare their entertainment choices and costs, using plausible but unscientific numbers. We'll assume that all of them have a decent television and stereo and some kind of television service (i.e. cable or satellite).

Consider a frugal and lucky moviegoing couple, whom we'll call the Smiths. Through one discount program or another (e.g. AAA), they can buy discount movie tickets for $6 each. They are lucky in that their local movie theater, which is a five minute walk away, doesn't enforce discount ticket restrictions like not being allowed for a movie's opening weekend. They also don't feel the need to buy popcorn, drinks, or anything else at the theater, so they just walk in, buy their tickets, and watch the movie. Total cost of the Smiths' movie entertainment for the evening: $12.

We'll also consider the Sullivans, who drive their 15mpg SUV five miles to the cinema, pay the full $9.50 ticket price, and typically spend $5 on snacks for the movie. At $3 per gallon of gas (not yet around here, but it's coming), their moviegoing experience costs them $25.

Another couple, whom we'll call the Parkers, have the cheapest NetFlix subscription ($9.99/month, 1 DVD out at a time). If they only watch one DVD a month, the cost of the Parkers' movie entertainment for the evening is still cheaper than the Smiths', at $9.99.

The Jacksons, on the other hand, prefer to own movies. They don't insist on buying a movie right when it comes out and they watch for deals, so they pay an average of $17 per DVD.

The Hendersons have a TiVo and some premium cable movie channel(s) (e.g. HBO, Showtime, Starz). Their monthly TiVo service costs $12.95, and their premium cable service is anywhere from $10 to $30 or more per month depending on which channels and how many of them. (Their friends, the Walkers, are happy to watch the older movies on TCM and AMC or plain old TV shows on their TiVo.) If they watch only one movie a month and they have a $32.05 (to make the numbers easy) premium channel package, their evening's movie-watching costs $45. In summary (rounded up to the nearest dollar):

Family       Entertainment              Cost per movie   Cost per month

Smiths       Cinema (frugal)            $12              -
Parkers      NetFlix                    -                $10
Jacksons     DVD purchase               $17              -
Hendersons   TiVo + premium channels    -                $45
Walkers      TiVo                       -                $13

We see that the Parkers are saving the most money here, but it's more interesting than that. Let us assume that our entertainment-seeking couples are interested in taking in a movie every weekend rather than just once a month. Conservatively, there are four weekends in a month, so the Smiths spend $48 for their monthly movie-watching. Of course, the Parkers' and Hendersons' costs haven't changed. The Hendersons are spending $3 less than the Smiths, and the Parkers are spending a whopping $38 less every month. The Jacksons are paying $68 per month, but at the end of the month they own four DVDs. The benighted Sullivans, however, are spending $100 a month on movies. Is the moviegoing experience, with its big screen, high fidelity sound, and arguably positive shared experience, worth 5 to 10 times more than watching a DVD in the comfort of one's home? Is it worth almost 50% more to see a movie a single time in a theater than it costs to own it on DVD? The cost differences get even wider when we add another family member; only the Smiths and Sullivans pay more, and they are already paying more per month than the other couples (except the Jacksons).
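The monthly tally above can be worked out in a few lines; this is just a sketch using the post's illustrative prices, with per-movie costs scaled by four movie nights a month and subscriptions held flat:

```python
# Per-movie prices (Smiths, Sullivans, Jacksons) scale with viewing
# frequency; flat subscriptions (Parkers, Hendersons, Walkers) do not.
MOVIES_PER_MONTH = 4

per_movie = {"Smiths": 12, "Sullivans": 25, "Jacksons": 17}
flat_monthly = {"Parkers": 10, "Hendersons": 45, "Walkers": 13}

monthly = {name: cost * MOVIES_PER_MONTH for name, cost in per_movie.items()}
monthly.update(flat_monthly)

# Sort cheapest-first to make the spread obvious.
for name, cost in sorted(monthly.items(), key=lambda kv: kv[1]):
    print(f"{name:<12}${cost}/month")
```

Sorting by monthly cost puts the Parkers at the bottom and the Sullivans at the top, which is the whole argument in one column.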

Of course, there are some things missing from these comparisons. There are differences in timeliness, for example, in that the Smiths and Sullivans will have access to movies before the Parkers and Jacksons, who may or may not have access to them before the Hendersons.

There is also the matter of breadth of available choice. The Smiths and Sullivans choose from a fairly narrow array of movies currently playing at their local theater. The Hendersons choose from a less narrow array of movies playing on their premium channels, depending on how many of them they have. The Walkers choose from a wide range of old films, plus television programs, but they will never have uncut access to anything rated higher than PG or maybe PG-13 (exceptions include Comedy Central's Secret Stash). The Parkers and the Jacksons have the broadest array of choices, including recent and older movies and TV series that have been released on DVD.

Probably the biggest missing consideration is the difference in quality. The Smiths and Sullivans will experience a higher quality picture and higher quality sound than the Parkers and Jacksons, who will experience higher quality than the Hendersons and Walkers (though the Hendersons might get better quality with HDTV premium channels and a PVR capable of recording HDTV quality).

The point isn't that going to the movies costs too much, though it does. It isn't that NetFlix is cost-effective and appealing, though it is. The point is that the perceived benefits of going out to see a movie no longer outweigh the costs (monetary, opportunity, and inconvenience) for many people. Many people's perception of the benefits of going to the movies has dwindled, largely due to annoyances like commercials and an arguably greater share of lousy films. Ticket and snack prices, not to mention gas prices, have driven up the cost of going to the movies. The availability of high-quality home theater systems has lowered the cost of skipping the theater in favor of watching something at home, thus raising the opportunity cost of going out. The inconvenience of leaving one's home, not to mention dealing with lines and other patrons' rudeness, hasn't changed much; many people's perception of inconvenience, however, has increased as they become more used to being entertained at home by television they actually want to watch (thanks to TiVo), DVDs, and video games.

I don't think the movie industry is doomed, nor do I think the movie theater market is drying up. Hollywood and movie theaters face greater competition than they ever have, however, and they will have to rise to the occasion and start improving the moviegoing experience. Being able to purchase tickets online and at kiosks helps. Improving the quality of movies and cutting down on boringly formulaic (Stealth, perhaps?) or ill-conceived (Deuce Bigalow, anyone?) films will help. Getting rid of those infuriating commercials will help. Going to the movies can compete with home entertainment again, but it will require some changes.


Public Education 

Yes, there are many things wrong with the U.S. public education system. Yes, it's complicated. No, there isn't a simple solution. That said, this is about just one small aspect of public education: whether and what to teach concerning Creationism.

To some, the Darwinian theory of evolution is blasphemous, denying God as the Creator. To others, Creationism is superstitious and anti-scientific drivel, dogmatically explaining natural phenomena with divine magic rather than conclusions drawn from observation and study. The fact is, either perspective is accurate if you accept the underlying premise, the existence of a divine Creator or the lack thereof. When we disagree on a basic premise, how do we find common ground on what to teach students?

Rebranding Creationism as Intelligent Design may seem like just playing word games, but it's a bit more than that; it is the best hope for agreement on what to teach students about the origin of life. Truly hardline fundamentalists who will not accept that the world is more than 4000 years old will not be able to reconcile their worldview with the idea of early cellular life taking eons to develop into the species we know today, nor will the idea of any kind of divinity sit well with hardline atheists, but anyone in between those two extremes should be able to reach some agreement on what to teach.

There are two parts to teaching about the origins of life. The first is what we actually know from observation, experiment, and study of resources such as the fossil record. We know of the existence of a huge variety of species from simple observation. We know about Mendelian genetics from Mendel's experiments. We know about extinct species and worldwide extinction events from the fossil record. The second part is the conclusions we draw from what we know. Given our primary factual knowledge plus the premise that God created the world, Man, and everything else, we arrive at Intelligent Design. Given our primary factual knowledge plus the premise that there was no divine intervention in the development of life, we arrive at Darwinism.

If we can manage to teach the primary factual knowledge about the origins of life separately from the conclusions we draw from that knowledge, we can teach the difference between Intelligent Design and Darwinism as the choice of premise. Darwinism relies on random events to explain how evolution, a phenomenon we can observe with our own eyes at the present time, produced the various life forms we see today. Intelligent Design replaces that randomness with the Hand of God. Whether evolution was guided by a divine Creator or natural randomness is a question for theologians and philosophers, not educators.

Of course, I skipped over something. The debate has always been between Creationism and evolution, hasn't it? What I'm talking about accepts evolution as a given, and only quibbles about how the mechanism of evolution has produced what it has. Like many other debates, this has become a battle because of (deliberate or otherwise) poisoning of words. Evolution has come to refer to the theory of why life developed the way it did, whereas scientific theories explain how natural phenomena occur. Evolution as mechanism rather than evolution as etiology is something everyone can agree on as an observable phenomenon.

You can't please all of the people all of the time, but you can maximize the number of people you please. Sure, teach Intelligent Design in public schools as part of teaching about biology and the origins of life. Explain that it is based on a premise that may be impossible to prove or disprove. Teach Darwinism the same way. The mechanism of evolution, however, and the knowledge we have accumulated through observation, experiment, and study should be taught as fact. The hardliners on both sides will be unhappy, but the rest of us can be satisfied that our children will get a better education with a better distinction between what we know and what we choose to believe about it.


Site changes! 

The main changes are a site feed and comments. The site feed is provided by FeedBurner, which takes the Atom feed provided by BlogSpot/Blogger and converts it to RSS as necessary, based on the needs of the browser requesting it. Kind of nifty. As for comments, I've sort of wanted to allow comments for quite a while, but I haven't bothered to look into how to enable them. Well, they are now enabled. Go wild, but please try to stay on topic.

The only other real change is that my stylesheet is now inlined. This is less because I think it's a great way to deal with it and more because I am losing my old web space (where the external stylesheet had been) and haven't bothered to set up new space. I wish there were a way to store it at BlogSpot as a separate file, but that doesn't seem to be supported.


Clear lines, gray areas, and the law 

In the U.S., the executive branch of the government is responsible for enforcing the law, among other things. Any ambiguity in the law gives power to the enforcers, since it gives them greater opportunity to use their individual judgment on whether a particular action has violated a law. The purpose of checks and balances in our three-branch government is to prevent that kind of power from slipping from one branch (the judiciary) to another (the executive). The legislative branch is responsible for developing the laws, but laws are routinely struck down by the judiciary for being too ambiguous. (Granted, this usually happens when a law is ambiguous in such a way that citizens, unsure whether what they want to say violates it, say nothing at all; that chilling effect is considered prior restraint of speech and therefore a violation of the First Amendment.) The point of this paragraph, however, is that ambiguity in laws leads to trouble, and it is the responsibility of the legislative and judicial branches to avoid such ambiguity.

We, as citizens, are bound by law. We, as people, have some idea of what those laws should be based on our upbringing, philosophical beliefs, social environment, and religious beliefs (all of which overlap significantly). Where the overwhelming majority of us agree, the laws are easy; most everyone considers premeditated murder to be about the worst thing one can do, and thus first degree murder is illegal everywhere in the U.S. and generally carries a sentence of life in prison or even death. Where there is weaker agreement (especially historically), things are fuzzier; rape is considered by many to be as bad as first degree murder, but there are a lot of people out there who feel that the victims are somehow partly responsible, or even brought it on themselves, thus the penalty for rape varies from state to state and court to court.

What rape and murder laws have in common (and, indeed, the majority of the laws of the land) is that they are about protecting people from other people. This is one of the roles of the government. (Perhaps I'll talk about the necessary/useful/desirable roles of the government in another post some time, but I'll have to get all my reference ducks in a row first.) If one accepts that the law should protect people from one another, specific laws come down to an argument about one of two things: who counts as a person, and what actions should be protected against. The second issue is as worthy of debate as the first, but is a topic for another time.

Various groups of what we now consider people in various times and places have been considered non-people and, thus, have not been afforded protection from people. The more memorable examples include Jews (and others suspected of heresy) during the Spanish Inquisition, Jews under the Nazi regime, and slaves in the U.S. before the 13th amendment (and, arguably, their descendants up to and including the present). At present, there is even a vocal minority (e.g. PETA) who consider animals (or at least most mammals) to be people and thus deserving of the same protection under law.

So what makes someone a person? Is it that they are of the species Homo sapiens? That seems to be a good definition, but it is insufficient if we ever encounter intelligent extraterrestrial life. Furthermore, it doesn't take into account our prevailing attitude that children are afforded different rights and protections from adults, as are the mentally incompetent. This, of course, is one of those gray areas. The law, as developed by the legislature and interpreted by the judiciary, draws solid lines. Those solid lines are not intended to be "right" in some objective sense. In general, when a solid line is drawn in a gray area it is because there is no obvious right place to draw the line; the choice of where that line is drawn must not fall to the individual enforcers of the law (in the executive branch) but to the developers and interpreters of the law (legislative and judicial branches, respectively) so that it applies to everyone consistently. It is for this reason that the Roe v. Wade decision and subsequent legislation have drawn the line between "people protected from murder" and "non-people with no such protection" at birth.

The people of the United States do not agree on when a human life becomes a person. Pretty much everyone agrees that sperm and unfertilized eggs do not constitute a person, Pythonesque claims of "Every Sperm is Sacred" notwithstanding. Pretty much everyone agrees that when a child begins using language, he or she is a person. There are newborn infants left in dumpsters to die, abortions performed in all three trimesters, and emergency contraception (the morning-after pill) used throughout the U.S., indicating that there are people who believe that a fetus is a non-person from conception up to and beyond birth. At the same time, there are people who protest abortion clinics and demand that Roe v. Wade be overturned so that any sort of artificial termination of a pregnancy, including emergency contraception and wounding a pregnant woman so that she loses the baby, should be considered murder. There is no strong agreement here, nor is there likely to be in the foreseeable future. Without the presupposition of a divine answer, which is well outside the domain of the law, there can be no "right" line to draw.

Drawing the line at birth is a compromise. With the (comparatively) recent ban on "partial birth" abortions, the compromise has even been shifted. (Note that the legislation itself is flawed in that it is ambiguous about the breadth of what it actually bans, but that has been discussed thoroughly elsewhere.) One could argue that this shift is a move toward giving in to one group's opinions at the expense of another's, or one could argue that it is simply a balancing reaction to the weight of the pulls on either end of the spectrum. Nonetheless, the purpose of that solid line is not to protect babies from being murdered, nor to provide a woman with control over her own body, but to prevent the individual enforcers of the law from deciding on a case-by-case basis whether a woman is committing murder or legally discarding a part of her body.

What can be gained from understanding the discussion above? Those arguing vehemently to shift the line one way or the other, or even to prevent the line from being shifted, should be aware that the reason for the line is not to help or hurt their cause. Furthermore, the apparent hypocrisy of being against abortion and for capital punishment, or vice versa, is not hypocrisy at all; it is just a difference of opinion on who counts as a person and who does not. Are blastocysts people? Some people say yes, some say no. Are murderers people? Some say yes, some say no. As righteous as many people feel about their opinions, the law must not be based on religious beliefs and, therefore, must draw arbitrary lines in gray areas that satisfy the majority of the people at best, and dissatisfy equally at worst.


There is no terrorist nation 

The current election circus has brought to light some fundamental misunderstandings about terrorism. Not only do these misunderstandings result in debate and rhetoric based on faulty premises, they mislead the nation and the world into believing that a "war on terror" can be fought and won.

First of all, it seems that Al Qaeda has become synonymous with terrorism, which is hardly accurate. There are terrorist organizations throughout the world, including Hamas in Israel, the Irish Republican Army (IRA) in Northern Ireland, the Earth Liberation Front (ELF) in the U.S.A. (which has connections to PETA, incidentally), the Tamil Tigers in Sri Lanka, etc. (I recommend a Google search on any of these organizations with which you were not previously familiar.)

Second, there seems to be a belief that terrorists are members of some kind of terror nation. I don't mean that people believe that there is some Terroristan from which all terror stems, but there is this belief that if we can just kill all the terrorists then there will be no more terrorism. This is patently absurd. Terrorists are ordinary people who find themselves in an unacceptable position and use terror as a way to fight for a better life. Whether they are justified, or "right," or heroes, or positive in any way is partly a matter of perspective and partly a big maybe that depends very much on the specific circumstances.

There is a sort of spectrum of methods of fighting the prevailing system of government and authority, and where particular activities lie on that spectrum is largely a matter of their magnitude and level of success. On one end of the spectrum is the troublemaker, people like John Dillinger (who burned mortgage records when he robbed banks during the Depression, which freed many people from financial ruin). On the other end are successful revolutionaries, like those of the French and Russian revolutions. Somewhere in between lie terrorism, guerrilla warfare, and the like.

Whether a particular group's actions are positive or negative has a lot to do with perspective. To the British, the Boston Tea Party was a terrorist act. To many people in Britain's American colonies, it was an act of revolution. It isn't so far-fetched to paint the Iraqi insurgents as revolutionaries fighting off imperial colonialism, yet they are called terrorists.

To be clear, I'm not saying that blowing things up and killing people is to be praised. I want to point out, however, that terrorism and guerrilla warfare are tools for fighting government and authority that have been used throughout history. The U.S.A. has even funded "terrorists" when it seemed to suit our purposes, i.e. when they were terrorizing our enemies. Particularly egregious is the United States' support for Osama bin Laden when he was terrorizing the Soviets in Afghanistan.

The U.S. government does need to fight terrorism, but largely because it is the prevailing system of government and authority. Presumably, the citizens of the United States like that prevailing system (at least, they like it more than the alternatives), and are therefore in favor of fighting terrorism. Nonetheless, it is not a war and there is no front to which we can send troops. Each nation must protect itself. Each nation must be responsible for its homeland security. And nations must cooperate to defuse terrorist plans before they come to fruition. The reason we can't "win the war on terror" is that it is not a war. We need intelligence, law enforcement, and diplomacy to protect ourselves, not soldiers and weapons.


Programming for fun and profit 

It's time for a discussion of software engineering or, rather, software engineers. This should be entirely accessible to those who are not software engineers or, indeed, so-called computer people. The issue at hand is why one would or should program a computer.

First, why does anyone program? In general, programming is done for fun, for convenience, and/or for a paycheck. (Incidentally, when I say "for convenience" I mean programming to simplify some task that could be done without the need for programming where the requisite programming is not central to the task at hand.) All competent and experienced programmers are in it, at least in part, for fun. I would claim, in fact, that one cannot become a good programmer without finding enjoyment in it.

That's hardly an earth-shattering revelation, since the best workers in any field are those with a passion for the particular work. Nonetheless, an astonishing number of people have received degrees in computer science and pursued a programming career without any love for the work. Now that the whole dot-com boom has busted there are fewer going that route (many who otherwise would are now heading for biotech), but there are lots of mediocre coding grunts out there already.

Why follow a career path you don't enjoy? Why struggle at something you don't love and can't get especially good at? I knew dozens of people like this when I was an undergrad, and I know many others now. I'm even friends with some of them. I just don't understand them.

In some sense, I suppose I'm bemoaning the Peter Principle; these people have just reached their level of incompetence early in their careers. I am, however, of the opinion that being good at something one enjoys is worth far more than being mediocre at something one does not enjoy, even if the less pleasant job pays more. If you are a good retail manager, and you enjoy the work, why strive to be a mediocre programmer and hate your job?

Even worse, however, are those who really do enjoy programming, but aren't any good at it. Let me amend that. I really mean those who enjoy programming, aren't any good at it, and are a burden while they try (and possibly fail) to get better at it. They submit terrible patches to open source projects (or, worse yet, start their own). They waste half an hour (or an hour, or three hours) of a competent programmer's time with newbie questions to save an hour (or two hours or five hours) of their own looking at documentation or other references (and I don't mean posting to a mailing list or newsgroup, where anyone who spends time on it does so voluntarily, I mean direct interaction with a specific person over email, IM, phone, or in person). They write bad code and expect it to be integrated into that project at work with no further work on their part.

I'd like to believe that the people who hate what they do, programming included, can learn to like it. I'd like to believe that incompetence is a passing phase for those who do enjoy their jobs but aren't good at them. I'm just having a little trouble believing either.

What it really comes down to is how and when one chooses a career. When I chose to major in computer science in college it was because, upon looking back at my high school career, the only subjects I cared about and worked hard at were computer science and chemistry. (I very nearly chose chemical engineering, but it was more convenient to take CS courses my first semester and I found that I enjoyed them tremendously.) Looking back even farther, I had a chemistry set at an early age and I learned to program BASIC with my father at age six. If I had chosen some largely unrelated major, say biology pre-med, I might have made it into med school and gone on to be a doctor. Of course, I probably wouldn't be happy about it, even though I'd be making lots more money than I am now.

As with most life-defining decisions, choosing a career requires some serious self-knowledge. If you don't know what you want out of life, and what you enjoy in life, it is hard to make a career decision that will take you there.
