Monday, April 30, 2007

Insect colonies as prototypical dynamic entities

I can't get over how much I like David Sloan Wilson's Evolution for Everyone. (Click here for all my posts about this book.) I just finished a couple of chapters in which he talks about insect colonies as organisms. More generally, he talks about all organisms as groups of individuals, from bacteria on up. At the start of Chapter 21 he writes,
Individuals are not just members of groups but groups in their own right. We call them individuals because they have solved the problems of within-group conflict so well.
In my theory of entities (see "Putting Complex Systems to Work," especially sections 4 and 7), I talk about static and dynamic entities. Insect colonies are the prototypical dynamic entity. They depend on extracting energy from their environment to persist, and they consist of components that change over time even though the entity as a whole persists as an entity.

An interesting difference between insect colonies and dynamic entities in human societies is that in insect colonies the components — the individual insects — remain in a single colony all their lives. (This is not completely true. Some colonies divide when they get too big. But insects apparently don't participate in multiple groups simultaneously. At least they are serially group monogamous.) In human dynamic entities, the components (people) participate in multiple dynamic entities simultaneously. These include their family, social clubs, the companies they work for, teams responsible for projects they are working on, etc.

In "The Emergence of Cells During the Origin of Life," an award-winning essay published in Science last December, Irene Chen reports on an experiment that supports Wilson's perspective. In her experiment she demonstrates that RNA and cell membrane material, even though they are capable of replicating independently, are more successful when a membrane encapsulates the RNA than when they are separate. This could be the mechanism that led to the first cells.

Sunday, April 29, 2007

Filling in the reductionist blind spot

The chicken breeding experiment is a few years old, but it has not been widely discussed. Here's how David Sloan Wilson describes it. (Click here for all my posts about his Evolution for Everyone.)
Bill [Muir] wanted to increase egg production by selective breeding, and he tried to do it in two ways. The first method involved selecting the most productive hen from each of a number of cages [each of which contained 9 hens] to breed the next generation of hens. The second method involved selecting all the hens from the most productive cages to breed the next generation of hens. You might think that the difference between the two methods is slight and that the first method should work better. After all, it is individuals who lay eggs, so selecting the best individuals directly should be more efficient than selecting the best groups, which might include some individual duds. The results told a completely different story. …

[Using the first method Bill had effectively] selected the meanest hens in each cage, and after six generations had produced a nation of psychopaths. Inside the cage were only three hens, not nine, because the other six hens had been murdered. The three survivors had plucked each other during their incessant attacks and were now nearly featherless. Egg production plummeted. …

[Using the second method] the cage contained all nine hens, plump and fully feathered, and judging from their expressions they seemed to be having a good time! Egg production had increased dramatically during the course of this experiment.
In a note in the chapter from which this was taken, Wilson refers to his recent article "Human groups as adaptive units: toward a permanent consensus." Here's the introduction.
Foundational changes are taking place in our understanding of human groups. For decades, the biological and social sciences have been dominated by a form of individualism that renders groups as nothing more than collections of self-interested individuals. Now groups themselves are being interpreted as adaptive units, organisms in their own right, in which individuals play supportive roles.

Let me be the first to acknowledge that this new conception of groups is not really new. A long view of scientific and intellectual history reveals that the last few decades have been an exception to the rule. The founding fathers of the human social sciences spoke about groups as organisms as if it were common sense (Wegner 1986). Before them, philosophers and religious believers employed the metaphor of society as organism back to the dawn of recorded history.

Far from robbing recent developments of their novelty, this pedigree only deepens the mystery. How is it possible for one conception of groups to be common sense for so long, for a radically different conception to become common sense, and then for the earlier version to experience a revival? A superficial answer is that ideas are like pendulums that swing back and forth. On the contrary, I believe that the organismic concept of groups will become permanently established, in the same sense that the theory of evolution has become permanently established, even if there will always be a frontier of controversy. In this paper I will attempt to show how the ingredients for a permanent consensus are already at hand.
My explanation is the same as my explanation of entities (see, for example, sections 4 and 7 of "Putting Complex Systems to Work"), and the same as my explanation of the way in which reductionism is misused. Reductionism tends to be misused when we pretend that by explaining how an entity functions we can dismiss the entity as an element of nature. That is, just because we (human beings) are bio-mechanical devices doesn't mean that human beings as entities can be done away with. For those familiar with computer science, this is similar to saying that even though every level of abstraction can be explained by its implementation, and every object can be explained in terms of computer instructions, that doesn't mean the level of abstraction or the object itself can be ignored. I have referred to this tendency to dismiss the existence of an entity once one understands how it works as the reductionist blind spot.

Clearly the same is true of groups. Just because groups are made up of individuals doesn't mean that groups as entities can be ignored. They must be understood as having their own existence. One prediction follows: groups that persist as dynamic entities (i.e., entities that persist by extracting energy from their environment) must be taken into account, while groups that are simply aggregations of components need not be.

Wilson is very much a roll-up-your-sleeves-and-get-to-work scientist. In that spirit, here's how I would test the theory I just hypothesized. Chickens in a cage, I would predict, develop a group dynamic. Even though they didn't choose to form a group, the fact that they were confined together led them to behave as a group. It was the group, then, that evolved. If one conducted the same experiment with animals that didn't interact, i.e., that didn't develop a group dynamic, I would predict that the experiment would turn out differently. Selecting the best animal in each cage would be at least as effective and probably more effective than selecting the best cage to generate the next generation.

In other words, breeding at the group level succeeds only when the groups that are being bred are real entities and not just arbitrary collections of individuals.

To complete the thought, one has to understand how groups work. In the case of chickens (and other animals) a group is a level of abstraction that is created by how the individuals interact. Clearly one doesn't breed groups. There are no group genes. But in breeding individuals one breeds the way the individuals interact when in a group. In the experiment above, the first approach bred individuals who formed dysfunctional groups. The second approach bred individuals who formed groups that functioned well. If one were breeding animals that operated purely as solitary individuals, no matter how one bred them, no group would form and there would be no group effect.
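The prediction above can be sketched as a toy simulation. Everything here is illustrative: the payoff function (an aggressive hen lays slightly more herself while suppressing her cage-mates), the parameter values, and the breeding rules are inventions for the sketch, not taken from Muir's actual protocol.

```python
import random

CAGES, HENS, GENERATIONS = 10, 9, 25
random.seed(0)

def eggs(i, cage):
    # Aggression helps the individual a little but hurts cage-mates more.
    others = cage[:i] + cage[i + 1:]
    return 10 + 2 * cage[i] - 3 * sum(others) / len(others)

def cage_yield(cage):
    return sum(eggs(i, cage) for i in range(len(cage)))

def mutate(a):
    # Offspring resemble the parent, with a little variation, clipped to [0, 1].
    return min(1.0, max(0.0, a + random.gauss(0, 0.05)))

def breed_best_individuals(cages):
    # Method 1: the top layer in each cage founds that cage's next generation.
    new = []
    for cage in cages:
        best = max(range(len(cage)), key=lambda i: eggs(i, cage))
        new.append([mutate(cage[best]) for _ in range(HENS)])
    return new

def breed_best_cage(cages):
    # Method 2: every hen in the most productive cage reproduces.
    best = max(cages, key=cage_yield)
    return [[mutate(a) for a in best] for _ in range(CAGES)]

def run(step):
    cages = [[random.random() for _ in range(HENS)] for _ in range(CAGES)]
    for _ in range(GENERATIONS):
        cages = step(cages)
    return sum(cage_yield(c) for c in cages) / CAGES  # mean eggs per cage

individual_yield = run(breed_best_individuals)
group_yield = run(breed_best_cage)
```

Because each hen's own score rises with her own aggression, method 1 ratchets aggression up and total yield down; method 2 selects on the cage total, which falls as total aggression rises, so it breeds the aggression out. Dropping the cage-mate term from `eggs` models non-interacting animals, and the two methods then converge.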

It's important to realize that there are no ant colony genes as such. The functioning of the colony depends on how the individual ants operate. But in fact, the way the individual ants operate creates a colony-level effect. So by evolving the ants to interact in a particular way, one is creating what might otherwise be referred to as the rules of the colony, i.e., the rules that the ants obey in their interactions, which in turn determine how the colony functions.

[A historical review of the recent debate about what is now called multi-level selection is available in this paper by Samir Okasha. He has just published Evolution and the Levels of Selection, which is apparently a fairly technical treatment of the subject.]

Colony-level and similar phenomena are sometimes referred to as emergence, as if they were somehow magical. Clearly, once one sees how the rules by which the ants operate produce the colony-level effect, it is no longer magical. I discuss this in "Putting Complex Systems to Work" as well as in the earlier "Emergence Explained," which is available in draft form here.

See also "Group selection and merit pay at the CSU" and "Simulating group selection."

Edward Luttwak on the Middle East

Bob Weber forwarded this article by Edward Luttwak.
… Global dependence on middle eastern oil is declining: today the region produces under 30 per cent of the world's crude oil, compared to almost 40 per cent in 1974-75. In 2005 17 per cent of American oil imports came from the Gulf, compared to 28 per cent in 1975 …

We devote far too much attention to the middle east, a mostly stagnant region where almost nothing is created in science or the arts, and, excluding Israel, per capita patent production … is one fifth that of sub-Saharan Africa. The people of the middle east (only about five per cent of the world's population) are remarkably unproductive, with a high proportion not in the labour force at all. …

The middle east was once the world's most advanced region, but these days its biggest industries are extravagant consumption and the venting of resentment. According to the UN's 2004 Arab human development report, the region boasts the second lowest adult literacy rate in the world (after sub-Saharan Africa) at just 63 per cent. … Moreover, despite its oil wealth, the entire middle east generated under 4 per cent of global GDP in 2006—less than Germany.

Unless compelled by immediate danger, we should therefore focus on the old and new lands of creation in Europe and America, in India, and east Asia—places where hard-working populations are looking ahead instead of dreaming of the past.

The terms of the political debate have changed

Instead of "How do we win the war?" the premise has become "Bush's strategy has failed." The question to be answered becomes "What do we do now?" Here are retired Lt. Gen. William Odom's remarks, delivered as the Democrats' weekly radio address. Here is an article by (active duty) Lt. Col. Paul Yingling, which criticizes the military for not speaking out earlier.

Friday, April 27, 2007

Change is about density, not acceleration

From an article about Moore's Law
'Moore's Law is frequently misquoted, and frequently misrepresented,' noted [Brian] Gammage. While most people believe it means that you double the speed and the power of processors every 18 to 24 months, that notion is in fact wrong, Gammage said. 'Moore's Law is all about … the density of … transistors, and not what we choose to do with [them].'

The actual law—which didn't morph into a governing mandate until [Carver] Mead described it as such—was first summarized by a young Gordon Moore in a 1965 issue of Electronics Magazine:

'The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.'
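Moore's arithmetic is easy to check. Assuming a 1965 starting point of roughly 64 components per chip (about what his data showed) and one doubling per year:

```python
components_1965 = 64              # roughly 2**6 components per chip in 1965
doublings = 1975 - 1965           # one doubling per year for a decade
components_1975 = components_1965 * 2 ** doublings
print(components_1975)            # 65536, which Moore rounded to "65,000"
```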
The notion that what increases is density is more general than Moore's Law. We frequently talk about accelerating rates of change. I don't believe that's true. Change can't happen faster than people can invent. And we are doing that as fast as we can. What can accelerate is the density of change. More and more areas of our lives are subject to change, which makes it feel like change is occurring faster and faster.

Thursday, April 26, 2007

David Sloan Wilson's new book

I just got a copy of David Sloan Wilson's Evolution for Everyone. (See also my earlier post on Natalie Angier's review. Click here for all my posts about this book.) It's truly charming. He explains evolution as follows (paraphrased a bit).
Imagine a population of moths that vary in their coloration. Some are more easily spotted and eaten by predators. The offspring of the survivors resemble their parents, so the offspring generation is [harder for the predators to see] than the parental generation. If we repeat the process over many generations, the moths will become very hard for the predators to spot.

Is that all there is? Just about. Learning about natural selection is like having a premature orgasm. You think it will take a long time and lead to a tremendous climax, but then it's over almost as soon as it began!

The main question about natural selection is not "What is it?" but "Why is it such a big deal?"
He goes on to answer that question.
Suppose you were asked to explain how some object obtained its properties. Before Darwin you would have had only two options. You could say that God designed it according to His intentions. Or you could dismantle it and explain the whole as a product of its parts. The big deal about evolution is that it provides a third way of explaining the properties of an object.

Suppose that I place a clay sculpture in your hand and ask you how it obtained its properties. You would spend most of your time talking about the shape imposed by the artist, not the properties of the clay. The clay permits but does not cause the properties of the sculpture. In just the same way, heritable variation turns organisms into a kind of living clay that can be molded by natural selection.
In other words dynamic (as distinguished from static) entities, which must extract energy from their environment to persist, must adapt to their environment when the environment changes — or cease to exist. (See sections 4 and 7 of "Putting Complex Systems to Work" for more on my theory of entities.)

Wednesday, April 25, 2007

Is computer science just modeling?

The recent discussion of abstraction has been very interesting for me. I seem to have talked myself into the position that what we are (or should be) teaching when we teach computer science is modeling — where modeling refers to both structure and behavior. Almost all software is either (a) a model of something (Turing conceptualized a Turing Machine as a model of human computation, hence even software that implements an algorithm is a model) or (b) a tool (or other framework) to help build models. Abstraction is the process of stepping back from the act of modeling and building tools that capture the processes and structures we use to build models. That seems both right and too simple.

I have claimed that computation is really externalized thought. (See section 5 of "Putting Complex Systems to Work" as well as "If a Tree Casts a Shadow is it Telling the Time?" A revised edition will appear in an upcoming issue of the International Journal of Unconventional Computation.) Thought in some reasonable sense can be said to be the development of (mental) models. So if software is externalized thought it makes sense to say that computer science is really about modeling and the abstractions that underlie and enable it.

If, as suggested earlier, modeling is the representation of something in a well-defined language, then software is the representation of the subject matter of a mental model in a particular computer language. Even an algorithm expressed as a program fits this definition. A program that implements an algorithm is the representation of the algorithm in the programming language. As noted in the earlier entry, this perspective implies a three-fold division: the thing being modeled, the conceptualization of the thing, i.e., the conceptual model, and the externalization or representation of the conceptual model in a well-defined language.

The difference between this position (and the one taken in "If a tree …") and that taken by most science and engineering education is that this position takes our conceptual (mental) models seriously. Most of science attempts to avoid talking about mental models. Even though all of science is about ideas — how are we to understand the world — we don't like to talk about ideas because we can't hold on to them. Thus we talk about representations, typically mathematical representations, of ideas. That's fine — except when we lose track of the fact that the point of it all is to understand the world — and then perhaps to use that understanding to add new things to the world. (See "turning dreams into reality" below.)

This position also argues for teaching computer science students as many (mental) modeling techniques as we know — including mathematical techniques. As noted earlier, the point is not to teach mathematics as mathematics but to teach mathematics as a language for expressing models. The same argument could be made about many other disciplines. Most disciplines develop models of their domains. It would be useful for computer science students to learn as many ways as possible to conceptualize the world. The challenge then would be to find ways to make models developed in other disciplines expressible in an executable form, i.e., as software.

Of course most software does not attempt to model something that exists in the world prior to the software. Most software is the development of something new. But consider how our intellectual abilities work. We evolved the ability to conceptualize the world presumably because it was a useful survival capability. But once we developed that ability, we used it to invent new worlds — to write fiction — as well as to conceptualize the world as it is. The same is true for software. Most of the software we write involves developing models of things that don't exist elsewhere. But the ability to build software that models anything is grounded in the ability to write software that implements models in general — of both existing and imaginary elements. So it makes sense to argue that computer science is the study of building models and that anything that helps develop that ability will contribute to one's ability to write better software. Abstraction, then, is the factoring out of common elements in model building and the understanding of those common elements as conceptualizations in their own right.

My department is in the School of Engineering, Computer Science, and Technology. We recently had a school-wide open house in which we wore T-shirts that said "Turning dreams into reality." Although I don't for the most part think of Computer Science as an engineering discipline, I agree that we have that in common. We are both creative disciplines in which we create visions, which we then transform into something that exists in the material world. (Even software and the cyberspaces that it creates exist in the material world.)

Engineering is said to be the application of science to practical ends. If one thinks of engineering as transforming dreams to reality, it is far more than that. But science is what underlies engineering. It provides the understanding that allows the engineer to transform a dream into reality.

Perhaps in the same way, modeling serves as the "science" of computer science. It's the understanding of models and how models work that allows software developers to transform their dreams into software reality.

What does religion say about the natural world?

[This is a repeat of a previous post. I've been adding labels to posts. For some reason blogger refused to add a label to the earlier version of this one.]

In the debate between Andrew Sullivan and Sam Harris (and in many other debates about religion) a central question often seems to be whether there is any "scientific" evidence for any of the supernatural claims religion makes. In my comments, I've made the point (somewhat implicitly) that this is the wrong question. Here I'd like to clarify this point. It seems to me that the important question is not whether secular human inquiry can confirm supernatural religious claims but whether religious claims can add anything to secular human inquiry about the natural world.

Most modern adherents of religion would say that religion can't add anything to secular human inquiry about the natural world. Only certain fundamentalists and other religious intransigents now claim that religion can compete with science or other forms of secular human inquiry in determining facts about the natural world. Religion no longer claims to have a position about issues such as evolution or whether the earth orbits the sun. Issues such as these are left to secular inquiry.

Recent attempts to measure the efficacy of anonymous prayer in healing illustrate the sort of question that might be used to establish a claim that religion can add to our understanding of the natural world. Studies about such effects were first published in reputable journals but later withdrawn. If this work could be put on a more firm foundation, it would provide evidence that religion has something to say about the natural world that is not accessible through other forms of inquiry.

On the other hand, if no such evidence is brought forward, then it seems to me that religion as a theory about the natural world is essentially impotent. One may believe it or not with no intellectual consequences either way.

Of course whether or not religion has anything to add to secular inquiry regarding the natural world, religious beliefs certainly affect how many people act. But that's an entirely different issue.

Word of the day: vector check

an assessment of direction. To do a vector check is to determine if one is going in the right direction. Generally used to mean to check with one's manager to see if a project is on the right track.

Tuesday, April 24, 2007

Abstraction, analysis, and patterns

In previous pieces (see abstraction), I discussed abstraction in computer science and mathematics. It has been suggested that what I'm really after is not abstraction at all but analysis. There may be something to that.

I'm not completely comfortable with the term analysis because it carries with it a sense of dissection rather than the discovering of the essence of something. Here, for example, are the relevant senses from Merriam-Webster.
1 : separation of a whole into its component parts

2 a : the identification or separation of ingredients of a substance b : a statement of the constituents of a mixture

3 [the branch of mathematics referred to as analysis].

4 a : an examination of a complex, its elements, and their relations b : a statement of such an analysis

The sense of analysis that comes closest to what I mean by abstraction is sense 4.

Interestingly enough, another dictionary comes up with a relevant definition.
a method of studying the nature of something or of determining its essential features and their relations: the grammatical analysis of a sentence.
This sort of analysis is something like what we call systems analysis, i.e., the analyzing of a problem domain prior to attempting to find a system that will solve a problem for that domain.

I was unable to find a good definition of systems analysis. Most of them were much too concrete. Wikipedia, however, has this entry for problem domain analysis.
the process of creating a model describing the problem to be solved.
This says that analysis is the creation of a model for some domain. Although that's a reasonable way of putting it, this only raises the question of how we want to define model.
An abstract model (or conceptual model) is a theoretical construct that represents something, with a set of variables and a set of logical and quantitative relationships between them. (Wikipedia)

A description of observed behaviour, simplified by ignoring certain details. (Free online dictionary of computing)

a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs; also : a computer simulation based on such a system < climate models > (Merriam-Webster)

a simplified representation of a system or phenomenon, as in the sciences or economics, with any hypotheses required to describe the system or explain the phenomenon, often mathematically.

A representation of something, often idealised or modified to make it conceptually easier to understand.
The problem with these definitions is their emphasis on simplification rather than on finding the essential elements of the thing being modeled. Perhaps a reasonable definition of model is
a representation of something in a well-defined language.
I like this definition, although it may include virtually any explication. It implies a three-fold division: the thing being modeled, the conceptualization of the thing, i.e., the conceptual model, and the representation of the conceptual model in a well-defined language.

Debora pointed out that studying mathematics is a good way to learn how to do modeling in this sense — at least the sort of mathematics that one does when learning how to solve word problems. In doing word problems one is asked to translate a problem statement expressed in English into mathematical equations — and then to solve those equations. In effect, one is asked to build a model. Of course this isn't doing mathematics so much as learning mathematical modeling techniques — and then being asked to solve the equations that result from expressing a problem in terms of those modeling techniques.
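The translation step can be made concrete. Take a made-up word problem: adult tickets cost $5, child tickets $2, and 100 tickets brought in $320. The modeling step turns the English into two linear equations; the solving step (here, Cramer's rule for a 2×2 system) is mechanical and could just as well be delegated to a tool.

```python
def solve2x2(a11, a12, a21, a22, b1, b2):
    # Cramer's rule for [[a11, a12], [a21, a22]] . (x, y) = (b1, b2)
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# The model: a + c = 100 (ticket count), 5a + 2c = 320 (revenue).
adults, children = solve2x2(1, 1, 5, 2, 100, 320)
print(adults, children)   # 40.0 60.0
```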

It would be useful to computer science students to teach them mathematics in this way, i.e., as a way to model problem domains. In most mathematics courses, the use of the mathematical tools for modeling purposes tends to be taught as an afterthought or as an exercise. It would be much more useful (at least to computer science students and to most non-mathematics students) to teach the subject as modeling tools rather than as formal mathematics.

It would be even better if the modeling aspect were taught separately from the solving of the equations. There are many systems available that can solve equations once they are generated. It would be useful to teach mathematics to non-mathematicians by using those systems. The goal of the course would be to teach students how to use a mathematical modeling language as a way to characterize a problem. Once a problem were characterized, the tool would use the equations to help one understand the problem from various perspectives.

This would mean, for example, teaching a course in system dynamics to computer science students rather than teaching a course in partial differential equations. System dynamics could be taught much earlier and with far fewer mathematical prerequisites than today's courses in partial differential equations.

Of course, those students who wanted to know how the system solved the equations or how to build a system dynamics modeling system would have to learn the necessary mathematics. But for most computer science students, it would be quite valuable to learn how to express relationships in terms of partial differential equations even if they relied on software to solve or simulate the effects of those equations. This is analogous to the way we teach programming languages. We teach all students how to write in a programming language. Only some students learn how to write compilers.
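That division of labor can be sketched directly. Below, a generic Euler integrator plays the role of the equation-solving system; the student's job is only the modeling step, here a textbook predator-prey system with illustrative, made-up parameters.

```python
def euler(deriv, state, dt, steps):
    # Generic fixed-step integrator: the part students may treat as a black box.
    for _ in range(steps):
        state = [s + dt * d for s, d in zip(state, deriv(state))]
    return state

def rates(state):
    # The modeling step: express each stock's rate of change.
    prey, predators = state
    return [0.8 * prey - 0.4 * prey * predators,
            0.1 * prey * predators - 0.5 * predators]

prey, predators = euler(rates, [10.0, 2.0], 0.01, 1000)
```

Writing `rates` requires understanding the situation being modeled; writing `euler` (or a serious replacement for it) requires the numerical mathematics that only some students need.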

I think that if mathematics were taught that way it would in fact help computer science students learn how to think more abstractly. Each such mathematical modeling framework would be another tool they could use to understand a situation from one or another abstract perspective.

Certainly learning a modeling language or framework is not the same thing as creating one, which is the true joy of abstraction. But the more modeling languages or frameworks one knows, the better one is likely to do when asked to analyze and understand a new problem.

In software we have developed the notion of a design pattern, which is, in effect, an approach to a particular kind of design problem. In software, we have also created the notion of refactoring, which means to extract out a feature of software that is more general than the particular problem being solved and use a generic version of that feature. When these two notions are used together, one gets a particularly powerful form of abstraction and modeling. An example will probably help.

Recursion is a technique for processing recursive data structures such as tree-structured graphs. One writes software that operates on a node of the tree and then calls itself to operate on the subnodes of that node. Thus one has to write the processing part only once and use recursion to have those processing steps applied to all tree nodes. At one time recursion was considered a sophisticated technique. Now it is taught in lower division computer science classes.

Still, recursion is a powerful programming technique. But the recursive aspects of it can be factored out into what is known as a visitor pattern. A visitor pattern is a software structure that accepts two inputs: a tree and a chunk of processing software. The visitor traverses the tree and applies the processing software to each node. In this way the recursiveness has been factored out of the processing software — which now need not worry about anything but the processing steps — and encoded separately in the visitor pattern. Even more generally, a visitor pattern can be applied to any data structure; it need not be a tree.
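A minimal sketch in Python (the classic object-oriented Visitor uses double dispatch; this functional variant keeps the same separation of concerns, and the names are mine):

```python
class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def visit(node, action):
    # The traversal, factored out once: apply 'action' to every node.
    action(node)
    for child in node.children:
        visit(child, action)

# The processing code now knows nothing about traversal.
tree = Node(1, [Node(2, [Node(4)]), Node(3)])
values = []
visit(tree, lambda n: values.append(n.value))
print(values)   # [1, 2, 4, 3]
```

Replacing `visit` with, say, a breadth-first or iterative traversal changes nothing in the processing code, which is the point of the separation.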

In this way we have separated the concerns (which is a common phrase in software these days) of the processing code from the data structure traversal code. In doing so we have created the visitor pattern abstraction. It is this sort of abstraction that seems to be particularly common in software. The more one studies software, the more one is able to separate concerns. Each time one does that, one has created a new abstraction. Perhaps because software deals with such a broad range of subject matter, there seems to be no end to the ability of computer science people to find new abstractions.

There is even a theorem to this effect. In algorithmic information theory, it is undecidable in general whether a particular Turing machine is the most compact representation of a given string. There may always be a more abstract representation.

And speaking of separating concerns, the notion described above of teaching mathematics as a modeling language separates the concern of mathematics as a way of characterizing a problem domain from mathematics as a way of solving equations. It would be worthwhile (to computer science students at least) to separate those concerns.

Word of the day: deep dive

I found one online definition, in the NetLingo Internet Dictionary.
Slang for exploring a subject in-depth.

For example, 'We did a deep dive on that market and found nothing of value there.'

Monday, April 23, 2007

Limbo is a reductio ad absurdum

In this post I noted with surprise that people take the notion of limbo (where children who die before being baptized go) seriously. It now strikes me that this question is really a reductio ad absurdum and should have stopped a lot of theologizing a long time ago.
  • The innocent should go to heaven.
  • Newborn children are innocent.
  • No one who dies before being baptized goes to heaven.
  • Therefore, newborn children don't go to heaven even though they are innocent. But this is a contradiction. So one of the premises or underlying assumptions must be wrong.
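The contradiction can even be checked mechanically. A toy sketch (purely illustrative) that enumerates every truth assignment and finds none satisfying all the premises at once:

```python
from itertools import product

# Propositions about a newborn who dies before baptism:
# innocent, baptized, heaven.
def consistent():
    for innocent, baptized, heaven in product([True, False], repeat=3):
        premises = [
            (not innocent) or heaven,  # the innocent go to heaven
            innocent,                  # newborn children are innocent
            baptized or (not heaven),  # the unbaptized don't go to heaven
            not baptized,              # this child died before baptism
        ]
        if all(premises):
            return True
    return False

print(consistent())  # → False: no assignment satisfies all the premises
```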
Catholics used to solve this by claiming that even newborn children were not innocent. But apparently the Pope is now questioning this premise.

A simpler (and to my mind preferable) solution would be to give up the ideas of innocence, baptism, and heaven entirely. They may be comforting stories one can tell oneself, but as the preceding shows they don't really work as rigorous thought.

I wonder how much mental energy has been devoted to attempting to deal with issues of this sort.

Word of the day: to task; tasker

To order/request that a task be performed. The order/request itself.
The president tasked us to determine the best way to reduce current corporate expenses. The tasker arrived by email this morning. It was assigned to you.

Sunday, April 22, 2007

What is abstraction?

In previous posts I discussed computer science, mathematics, and abstraction, arguing that computer science is the best discipline for teaching abstraction. I didn't attempt to define what I meant by abstraction. Jeff Kramer characterized abstraction as "the process of selection and removal of detail." Here are some dictionary definitions that take a similar position.
to consider something as a general quality or characteristic, apart from concrete realities, specific objects, or actual instances. —

to consider theoretically or separately from something else — Oxford Compact Dictionary

to consider (a quality, for example) without reference to a particular example or object. — American Heritage Dictionary

to consider a concept without thinking of a specific example; consider abstractly or theoretically —
The problem with these definitions is that they assume that one already knows what it is that needs to be abstracted, i.e., what the unnecessary details are that should be stripped away, what the general qualities or characteristics are that one is considering, what it is that should be considered theoretically or separately or without reference to a particular example.

But that's not the sense in which abstraction is used in computer science — and which I believe is the more important usage. The ability to abstract is not the ability to consider a quality without reference to particular instances. It is the ability to get to the essence of something, to decide what's more and less fundamental in a situation. Abstraction in this sense is a creative activity. It involves understanding a situation deeply and seeing the essential features. In the process one does strip away the contingent elements and the characteristics that appear because of a particular instance. But the stripping away is done creatively as one determines what is essential and what is superficial (for some particular problem).

In important cases, abstraction involves creating the abstract ideas that characterize the essential elements. That's the hardest but most fulfilling part of abstracting. When one finds a way to see something that makes it clear what's really going on, how all the pieces fit together, that, I would say is the real process of abstraction.

A personal experience illustrated this for me. About a year ago I wrote a paper which I called "Open at the Top; Open at the Bottom; and Continually (but Slowly) Evolving." In it, I talked about the US Postal System as an early example of a communication infrastructure that fostered and enabled innovation. The USPS was, in effect, an early version of the world wide web. Postal addresses were like web sites. It was easy to create one. It allowed individuals and companies to interact with each other. Mail order businesses flourished. Etc.

In the paper I struggled to characterize just what it was about the postal system (and current systems like the web) that allowed this kind of creativity to grow up around them. The title was my answer. Like the web, the postal system defined a protocol for interaction. It didn't matter how that protocol was implemented: the pony express, hand delivery, post office boxes, use of commercial airlines, copper wire, fiber optics, satellites, etc. What was important was that the protocol worked no matter how it was implemented. That's what was meant by open at the bottom.

Also the protocol allowed users to be creative with it. One could establish a mail order business or a personal address just as now one can establish a web business or a personal website. That's what was meant by open at the top.

The protocols can even change, e.g., the price of postage increases; zip codes are introduced; new versions of html become standardized. But as long as the change happens slowly enough the infrastructure persists and people can continue to use it.

I thought this was an important idea and that I had a pretty good sense of what was going on. But it wasn't really crisp.

Late last year, I came across the idea from economics of multi-sided platforms. See, for example, this interview with Andrei Hagiu, which I didn't see until after his book Invisible Engines: How Software Platforms Drive Innovation and Transform Industries was published last Fall. (It can be downloaded free.) The notion of a multi-sided platform was what I needed to complete the abstraction.

From the economic perspective, a multi-sided platform is a business that sells a mechanism that brings two groups of users together. A shopping center is a multi-sided platform. The users are buyers and sellers. Note how different this is from a traditional business in which one sells either a service that one performs or a product that one either (a) constructs from components that one buys from suppliers or (b) buys at wholesale and makes available at retail. The owner of a multi-sided platform sells access to one group of users to the other group of users. Any advertising medium is a multi-sided platform. The insight of recent economics researchers is that the postal service and the world wide web are also multi-sided platforms. The book by Hagiu et al. has the further insight that many software systems are multi-sided platforms. An operating system, for example, brings software developers and software users together.

My next step was to conceptualize multi-sided platforms in computer science terms. From a software perspective, a multi-sided platform is a level of abstraction that has multiple kinds of users. We all know that in Computer Science a level of abstraction is a set of operations and data types that are defined independently of their implementation. In computer science we frequently refer to a level of abstraction as a platform. A level of abstraction is, of course, open at the bottom. One doesn't care how the abstractions it defines are implemented as long as they perform as advertised.
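In code, "open at the bottom" is just programming to an interface. A minimal sketch (the names here are illustrative, chosen to echo the postal example): the operations are defined independently of their implementation, and any implementation that performs as advertised can be substituted underneath.

```python
from abc import ABC, abstractmethod

class MailService(ABC):
    """A level of abstraction: an operation defined
    independently of how it is implemented."""
    @abstractmethod
    def deliver(self, address, message): ...

class PonyExpress(MailService):
    def deliver(self, address, message):
        return f"rider carries {message!r} to {address}"

class Airmail(MailService):
    def deliver(self, address, message):
        return f"plane carries {message!r} to {address}"

def send(service: MailService, address, message):
    # Users of the abstraction don't care which implementation
    # is underneath, as long as it performs as advertised.
    return service.deliver(address, message)

print(send(PonyExpress(), "Box 12", "hello"))
```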

A level of abstraction is valuable to the extent that the operations it defines can be used creatively, i.e., open at the top.

Levels of abstraction (or standards, which are specifications of levels of abstraction) may change. But they can't change too fast or too frequently, or they will lose their user base.

A level of abstraction that has two or more distinct sets of users is a multi-sided platform. It is the multi-sided platforms that enable communication among different groups of users. The browser is a multi-sided platform in that it brings together web site providers with web surfers. The protocol is html.

So finally, the right abstraction for the postal system and the web is a level of abstraction that is used by multiple sets of users, i.e., a multi-sided platform. That's the essence of what I was searching for in the paper about the postal service. I discuss this clearer characterization in section 6 of "Putting Complex Systems to Work," a paper I wrote last January. If you compare the two papers, you will see how far the abstraction has developed.

So I think it is the finding and clarifying of some essential qualities that characterizes what's really important about the process of abstraction. Of course this is what creative mathematicians do as well. It is what creative thinkers in any discipline do. But I believe that it is this sort of abstraction that the study and practice of Computer Science teaches better than the study of any other discipline.

Saturday, April 21, 2007

It's unbelievable to me that people take this seriously

TIME reports that "The Pope Banishes Limbo."
Pope Benedict XVI has reversed centuries of traditional Roman Catholic teaching on limbo, approving a Vatican report released Friday that says there were "serious" grounds to hope that children who die without being baptized can go to heaven.

Theologians said the move was highly significant — both for what it says about Benedict's willingness to buck a long-standing tenet of Catholic belief and for what it means theologically about the Church's views on heaven, hell and original sin — the sin that the faithful believe all children are born with. Although Catholics have long believed that children who die without being baptized are with original sin and thus excluded from heaven, the Church has no formal doctrine on the matter. Theologians, however, have long taught that such children enjoy an eternal state of perfect natural happiness, a state commonly called limbo, but without being in communion with God. "If there's no limbo and we're not going to revert to St. Augustine's teaching that unbaptized infants go to hell, we're left with only one option, namely, that everyone is born in the state of grace," said the Rev. Richard McBrien, professor of theology at the University of Notre Dame. "Baptism does not exist to wipe away the "stain" of original sin, but to initiate one into the Church," he said in an e-mailed response.
The image is from Ellen von Unwerth's 1998 photography exhibit "Original Sin." This is how it's described in Johns Hopkins University Press's Performing Arts Journal.
Inspired by E. J. Bellocq's turn-of-the-century photographs of prostitutes in Storyville, New Orleans, the artist has concocted a creole stew of sexual high jinx, picturesque decay, and exotic poverty. Acted out among ratty Victorian furnishings, her soft porn scenarios feature a cast of half-naked models dressed in scanty garments that call to mind moth-eaten Victoria's Secret lingerie. Hilarious French titles summon up Playboy at Night: Hot Rain, Burning, Feline, Thirsty.

On a pink wall, a sign saying "Tequila Sauza Estate Collection presents Original Sin" greeted the viewer at the door, alongside "Gallipettes," a large color print of a pair of legs clad in black boots and holey black stockings waving in the air. In "Mardi Gras," a nude model wearing the same black stockings and boots stands up on a rumpled bed, her only other clothing heavy lipstick and a black domino mask. Nude outdoors, the same model pulls on her stockings at night among blurred street lamps in "Minuit Rue Royale." A few landscapes provided a setting of vine-draped trees leaning over decrepit shacks and fetid swamps, and in other small photos, a narrative of sex and possible murder inside the shack begins to form. "Après-midi" showed a nude man lying on a stained floor; above him leans the same nude woman in her boots and stockings, pushing down on his tattooed chest. Hand prints cover a pair of thighs in "Mains Curieuses." Could it be blood?
The (Russian) model is Tatiana Zavialova. There is a worshipful English-language (Russian, i.e., .ru) web site devoted to her.

Word of the day: exfiltrate

Here's the definition.
–verb (used without object)
1. to escape furtively from an area under enemy control.
–verb (used with object)
2. to smuggle (military personnel) out of an area under enemy control.
I just heard the word used (with an object) to refer to the unauthorized sending of information out of a computer—or more generally out of an organization—e.g., by spyware.
The worm infiltrated the corporate IT system and then exfiltrated the company's personal records.

Friday, April 20, 2007

Rosa Brooks, "We're not all victims"

Rosa Brooks has the courage to point out the self-indulgence we show by pretending to be victims of the shootings at Virginia Tech.
Convincing ourselves that we've been vicariously traumatized by the pain of strangers has become a cherished national pastime. Thus, the Washington Post this week accompanied online stories about the shooting with a clickable sidebar, 'Where to Find Support' — apparently on the assumption that the mere experience of glancing at articles about the tragedy would be so emotionally devastating that readers would require trained therapists.

At the University of Buffalo, more than 500 miles from Virginia Tech, university counselors announced that they were 'reaching out to students feeling affected by … the tragedy.' In Dallas, area chaplains rushed (uninvited) to Blacksburg, Va., to 'be part of the healing process.' In Washington, an ordinarily hard-nosed corporate law firm e-mailed attorneys a list of 'resources for coping with traumatic events.'

Count me out. There's something fraudulent about this eagerness to latch onto the grief of others and embrace the idea that we, too, have been victimized. This trivializes the pain felt by those who have actually lost something and pathologizes normal reactions to tragedy. Empathy is good, but feeling shocked and saddened by the shootings doesn't make us traumatized or special — these feelings make us normal.

Our self-indulgent conviction that we have all been traumatized also operates, ironically, to shut down empathy for other, less media-genic victims. On the day of the Virginia Tech shooting, for instance, Army Sgt. Mario K. De Leon of San Francisco (like the Virginia Tech victims) died of "wounds sustained from enemy small-arms fire." On Wednesday, car bombs killed at least 172 people in Baghdad. But no one has set up a special MySpace page to commemorate those dead.
Great column! Rosa Brooks is one of the new op-ed stars. According to what is apparently her web site, she "is a columnist for the Los Angeles Times [which is where I see her columns] and a professor at the Georgetown University Law Center. (She is currently on leave from Georgetown to serve as Special Counsel at the Open Society Institute in New York)."

Computer science, mathematics, and abstraction

In a recent blog piece, I argued that mathematics is not particularly effective at teaching abstraction. That piece was in response to Jeff Kramer's April CACM article, "Is Abstraction the Key to Computing?" Jeff replied as follows.
We do have anecdotal evidence that those students that have good mathematical skills and have been through our discrete mathematics courses are better equipped to perform formal modelling. This tallies with my intuition that manipulation of mathematical symbols is analogous to manipulating abstractions.

However, as mentioned in my article, students still find it difficult to select the abstractions themselves, i.e., they still need to learn the process of selection and removal of detail, which is an essential skill. Perhaps this is because most mathematics courses present the students with the abstractions rather than requiring them to select them for themselves!

Hence, my feeling is that although it may be that mathematics is not the key to abstraction, it is helpful and could be improved to include more use of abstraction skills.
Yes I agree. The problem is that when one learns mathematics one is taught to deal with abstractions that have been created by someone else. One is not taught the process of creating abstractions.

Regarding students with good mathematical skills, my guess is that good mathematical skills correlate to some reasonable extent with the ability to abstract, but that learning additional mathematics does not improve one's ability to abstract as much as learning more computer science.

In fact I suspect that students who spend their time learning additional computer science improve their ability to abstract more than students who spend the same amount of time learning additional mathematics — the marginal value of computer science for abstraction is greater than that of mathematics. It would be interesting to design an experiment to test that hypothesis.

I would go even further and hypothesize that learning more computer science enhances one's ability to abstract more than the same amount of time spent learning any other discipline — that the marginal value of computer science for abstraction is greater than that of any other discipline. Any takers?

The unreasonable effectiveness …

I posted a version of the following comment on Alexandre Borovik's Mathematics Under a Microscope.
It strikes me that a bigger problem [than why mathematics is effective for physics] is why there are regularities in mathematics itself. I'm not a mathematician, and I don't know why Lagrange's four-square theorem (every non-negative integer can be expressed as the sum of 4 squares) is true. But it bothers me that it is.
As an aside, I have asked students to write a Haskell program to find those integers. The "greedy algorithm," which repeatedly adds the largest integer whose square is no larger than the remaining sum, doesn't always work. The smallest number for which it fails is 23. The greedy algorithm generates {4, 2, 1, 1, 1}, a multiset of integers the sum of whose squares is 23 = 16 + 4 + 1 + 1 + 1. But that multiset has 5 elements. A 4-element solution is {3, 3, 2, 1}: 23 = 9 + 9 + 4 + 1. Yet 4 is not a member of {3, 3, 2, 1}, even though 4 is the largest integer whose square is no larger than 23.
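The aside can be reproduced in a few lines (the original assignment was in Haskell; this Python sketch is mine, with illustrative names):

```python
from math import isqrt

def greedy_squares(n):
    # Repeatedly take the largest integer whose square fits
    # in what remains.
    parts = []
    while n > 0:
        k = isqrt(n)
        parts.append(k)
        n -= k * k
    return parts

def four_squares(n):
    # Brute-force search for four squares summing to n, in
    # descending order (Lagrange's theorem guarantees a solution).
    for a in range(isqrt(n), -1, -1):
        for b in range(a, -1, -1):
            for c in range(b, -1, -1):
                r = n - a * a - b * b - c * c
                if r >= 0 and isqrt(r) ** 2 == r and isqrt(r) <= c:
                    return [a, b, c, isqrt(r)]

print(greedy_squares(23))  # → [4, 2, 1, 1, 1], five terms
print(four_squares(23))    # → [3, 3, 2, 1], four terms, no 4 among them
```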
The question I'm asking is why such a regularity should exist in the first place. The natural numbers are such a simple thing. Why should something as strange as that be the case?

It seems that no matter how simple a structure, there is always some hidden additional structure within it. Why is that?

Andrew Sullivan on Faith

I find Andrew Sullivan quite brave in the final post of his debate with Sam Harris. He writes
You then ask why I should find it is so hard to imagine my non-existence? Your good points have made me realize more fully why I feel the way I do. The reason I find it so hard to imagine, I realize, is that I believe that God loves me. …

The reason I cannot conceive of my non-existence is because I have accepted, freely and sanely, the love of Jesus, and I have felt it, heard it, known it. He would never let me go. And by never, I mean eternally. And so I could never not exist and neither could any of the people I have known and loved.
I find it amazing for someone who lives in today's secular society to say something like that — Sullivan especially, who lives by his intellect. Yet there it is. As I said, I think it's quite brave of him to be as open about his faith as he is.

Of course I think he's mistaken. The mistake he makes in my view is quite simple. Sullivan is confusing his subjective experience (of feeling something, which he calls the love of God) with a fact about the world. Subjective experience, and especially our conceptualization of our subjective experience, is just that, subjective experience. It is something that goes on in our minds. This is not to take a dualist position. What goes on in our minds has a physical basis. But it's a level of abstraction above that physical basis. (This is a fairly long discussion. But a fundamental example here is that the software that implements the conceptual models we use to run our lives is a level of abstraction above the electrons flowing in computers. The electrons are required to run the software, but the conceptual models they implement, conceptual models like words on a blog, are not the same thing as electrons. See my papers on emergence for a fuller discussion.)

So the point is that the love that Sullivan feels cannot be taken to imply the existence of a lover. Yet that's exactly what he does. It's such a simple mistake that I'm surprised that neither he nor Harris comments on it. As I quote Richard Feynman as saying,
Science is what we have learned about how not to fool ourselves about the way the world is.
And as I quote myself as saying,
Buddhism and humanistic psychology are what we have learned about how not to fool ourselves about our subjective experience.
We must treasure our subjective experience. That's what makes us human. But we can't automatically turn our subjective experience into beliefs about the world. As Blue says about our species,
Humans: smart enough to have ideas; foolish enough to believe them.

Power, status, and age

I don't like the fact that I'm getting older. But one nice thing about it is that I seem to feel that the status gap between me and the more powerful is shrinking. When we were kids, the status gap was not that huge. (Perhaps it was bigger than I remember. It certainly wasn't that huge when we were very little kids, though.)

As we grow older, the status gap grows with us. My guess is that it reaches its maximum at about 50 - 60, when the very successful are near the pinnacle of their careers.

But after that it starts to decline. The older one gets, the clearer it is that power and status mean less and less. We are all subject to the physical frailties of age. Money and power don't change that. Certainly the rich and famous can buy better medical care. But for people who have some basic medical support that doesn't make that much difference. We are all going to die. (Perhaps that will change sometime this century, but that's another story.) For now, there is some comfort in feeling the power and status gap shrink as I get older.

Monday, April 16, 2007

Computer Science, Mathematics, and abstraction

In a recent blog piece, I argued that mathematics does not help computer science students develop their sense of abstraction.

Mitchell left a comment referring to an article on mathematics and abstraction. It makes the case that mathematics is all about abstraction, but we don't teach it that way. (I more or less said that we don't teach mathematics as abstraction, so we aren't that far apart.) This is definitely a nice article, and I like its emphasis on mathematics as abstraction.

There does seem to be a difference, though. Computer science artifacts, i.e., software, are grounded in executability. From that grounding, one can factor and abstract elements to one's heart's content and be sure, as long as one has done it correctly, that the result will be not only executable but more powerful and more elegant.

In mathematics, where is the grounding? The article uses numbers as an example of abstracting the notion of size.
No matter what collection you consider, abstract or concrete, it has a number that describes its size; no matter what type of objects your collections are made up of, the results of arithmetic operations will always describe the resulting collection accurately. Thus the simple statement that 2 + 3 = 5 is a statement that describes the behaviour of every possible collection of 2 objects, and every possible collection of 3 objects.
That's all well and good. It's a nice illustration of the mathematical notion of number as an abstraction. But is the step of going from the intuitive notion of the size of a collection to the mathematical notion of number mathematics, or is mathematics the step of formalizing this new notion of number and proving theorems about it? In my experience, mathematics has always been about the latter, i.e., proving theorems about existing abstractions. The actual process of coming up with the abstraction about which theorems should be proved is considered somehow pre-mathematics.

I think that the reason for that is the lack of groundedness in the abstracting process. Computer science is extremely fortunate that its artifacts are grounded in executability. Mathematics does not have that advantage. So mathematics can operate as mathematics only on already abstracted elements. By that time, the joy of abstraction has been lost, and one is stuck proving theorems about someone else's abstractions.

Of course good mathematicians make their own abstractions and then prove theorems about them. That's hard to do. And simply creating abstractions without proving theorems about them is not considered mathematics. One gets no mathematical credit for the abstraction step on its own. (Try to publish a paper that doesn't have any theorems in a mathematics journal.) So I'm still not convinced that mathematics as it is taught and practiced helps computer science students with the process of abstraction.

Saturday, April 14, 2007

How much do you have to know to watch a movie?

From a review of "Year of the Dog" in the New York Times
We are, during the opening of “Year of the Dog,” in seriously overtilled territory, somewhere between Alexander Payne and Todd Solondz. Mr. White lays out the narrow parameters of Peggy’s life with an aggressive lack of visual flair in a style that might be termed, to bend a phrase from Gloria Grahame, Late Nothing. His most obvious choice, one that feels like a misguided shortcut, is to arrange objects and people symmetrically — including Tom McCarthy and Laura Dern as Peggy’s obsessively matched brother and sister-in-law — inside the frame like displayed goods. It’s a standard-issue comedy strategy, fine for Barry Sonnenfeld-level laughs, but it also suggests that there’s nothing beneath these overly clean, tidy lines and, by extension, nothing much beneath Peggy, either.

How things work

From the New York Times
For the past two years, China has protected the Sudanese government as the United States and Britain have pushed for United Nations Security Council sanctions against Sudan for the violence in Darfur.

But in the past week, strange things have happened. A senior Chinese official, Zhai Jun, traveled to Sudan to push the Sudanese government to accept a United Nations peacekeeping force. Mr. Zhai even went all the way to Darfur and toured three refugee camps, a rare event for a high-ranking official from China, which has extensive business and oil ties to Sudan and generally avoids telling other countries how to conduct their internal affairs. …

Just when it seemed safe to buy a plane ticket to Beijing for the 2008 Olympic Games, nongovernmental organizations and other groups appear to have scored a surprising success in an effort to link the Olympics, which the Chinese government holds very dear, to the killings in Darfur, which, until recently, Beijing had not seemed too concerned about.

Ms. Farrow, a good-will ambassador for the United Nations Children’s Fund, has played a crucial role, starting a campaign last month to label the Games in Beijing the “Genocide Olympics” and calling on corporate sponsors and even Mr. Spielberg, who is an artistic adviser to China for the Games, to publicly exhort China to do something about Darfur. In a March 28 op-ed article in The Wall Street Journal, she warned Mr. Spielberg that he could “go down in history as the Leni Riefenstahl of the Beijing Games,” a reference to a German filmmaker who made Nazi propaganda films.

Four days later, Mr. Spielberg sent a letter to President Hu Jintao of China, condemning the killings in Darfur and asking the Chinese government to use its influence in the region “to bring an end to the human suffering there,” according to Mr. Spielberg’s spokesman, Marvin Levy.

China soon dispatched Mr. Zhai to Darfur, a turnaround that served as a classic study of how a pressure campaign, aimed to strike Beijing in a vulnerable spot at a vulnerable time, could accomplish what years of diplomacy could not.

Thursday, April 12, 2007

A civil claims market

Today we attended the UCLA Spring 2007 Faculty Research Lecture by Steve Yeazell. What I found fascinating was the perspective through which legal claims and their resolution through civil litigation could be understood as a market: the defendant, in effect, buys the claim from the plaintiff. It strikes me that such a perspective could be taken much further. Much of the civil litigation system could be made into a real market. (One of the questions asked after the talk suggested as much.) Here's how it might work.

The process would start when someone files a claim for damages along with supporting evidence. A date would be set, at which time the evidence submitted by that date would be reviewed by a panel of judges and a value assigned to the claim.

Prior to the decision date, the claimant could issue "stock" in his claim and offer the stock for sale. The offer price would reflect the damages claimed. Of course, once the stock was on the market, it could be bought or sold (or sold short) by anyone.

Anyone could, on their own, gather evidence for or against the claim and submit that evidence (under oath) to the court. This could include sworn depositions, videotaped, if one likes, to capture body language. Presumably a person submitting evidence would have bought or sold stock prior to doing so. (Although this is something like encouraging insider trading, it would seem to work in the public interest in this case.)

In effect this would be a way to use market forces to extend the adversarial system. We (apparently) believe that the adversarial system works well to uncover the facts. The two sides are each motivated to discover and present information that supports their respective cases. A market mechanism lets others participate in that process. Anyone with new information (or a clever way of looking at existing information) can monetize that information or argument by buying stock in the claim affected by it.

All the evidence would be public. As it accumulates, the market value of the stock would vary as the market assessed the likely value to be placed on the claim by the judging panel. Such a mechanism would encourage anyone who thinks the market has significantly over- or undervalued the claim to buy or sell the stock, which would tend to bring the stock's market price closer to the buyer's or seller's estimated value.

The plaintiff would have a wide range of options. Similar to a contingency fee arrangement, the plaintiff could use some of his original stock to hire lawyers and investigators, i.e., pay them in stock. More generally, he may at any time sell any or all of his stock, thereby cashing out his claim. Or he could hold on to some or all of it until the very end.

Prior to the decision the defendant would also have a number of options.
  1. He could submit arguments and evidence.
  2. He could buy the stock himself.
  3. He could offer to settle, presumably at something like the current market price. Such an offer would be accepted or rejected by a majority vote of the stockholders of record. This is the most interesting option. Once the market assesses the value of a claim, a settlement at that price should be appealing to both sides, thereby encouraging settlements.
If there were no settlement by the decision date the file would be closed to further submission. The panel of judges would review the evidence and arrive at a value for the claim. The result would be an obligation on the part of the defendant to pay the assigned value to the shareholders.

Could something like this be made to work?

Market effects

Markets, by their very existence, seem to create their own effects. Without markets, there are individual transactions between buyers and sellers. With markets, there are 'market prices' against which other prices are compared. There are speculators, futures, derivatives, traders, arbitragers, etc. Has there ever been a study of market effects, the kinds of phenomena that always seem to occur once a market is created?

Facebook | Terms of Sale

Facebook allows members to give "virtual gifts" to each other. The one I picked cost $1.00. Then I looked at the terms of sale.
Company's virtual gift service allows you to select an image from Facebook's Virtual Gift Store and post it on the recipient's profile along with a message from you (the 'Virtual Gift'). The length of time the Virtual Gift will remain posted on the recipient's profile (unless hidden or deleted by the recipient) will be determined by Company in its sole discretion, but it will remain there for a minimum of two weeks unless hidden or deleted by the recipient. The images and other content included as part of the Virtual Gift is part of the Site Content and is subject to all terms and conditions regarding such Site Content as are set forth in the Terms of Use. In addition, any message that you include with the Virtual Gift must comply with all terms and conditions regarding User Content and User Conduct as set forth in the Terms of Use.

Without limiting any of the foregoing, the Virtual Gift service is a service (notwithstanding any use of the terms 'purchase,' 'buy,' 'sell,' 'order' or the like on the Site or in these Terms of Sale), and neither you nor your recipient obtains or retains any rights or ownership interest of any kind in or to any Virtual Gift you send or receive through the Virtual Gift service, and neither you nor the recipient may reproduce, distribute, transfer, modify or otherwise use the Virtual Gift in any manner other than as expressly authorized by Company.

All use of the Virtual Gift service is for your personal, non-commercial use only.
[Emphasis and paragraphing added.]
Apparently neither the giver nor the recipient gets anything for the $1.00. Why does Facebook want to annoy its users like that?

Sunday, April 08, 2007

Religion and government

We had a discussion at dinner tonight about religion and government. Why should one want to keep church and state separate? It's so taken for granted as an axiom that we don't hear much discussion of why it's important.

My theory—which now that I'm coming to write it down will probably sound pretty simple-minded—is that we want to keep religion out of government because we don't want the government telling us what to believe. In some basic way it seems to me to be as simple as that. Government celebrations of religious holidays, for example, or government-sponsored prayers at public events tell people that those prayers or holidays celebrate a way of looking at the world that the government supports. One is forced either to accept that perspective or to acknowledge that one is out of step with how the government says the world is.

Government should not be supporting metaphysical and theological positions—or at least those who want to keep government and religion separate argue that government should not be making these kinds of assertions.

But what about public education? Isn't that government telling us what to believe? It probably is, and if one were consistent about wanting to keep government out of our heads, I'm not sure how one would justify government run education.

Debora suggests that a distinction can be made between belief and knowledge. It's ok for the government to teach people arithmetic, for example. This generalizes to the position that it's ok for the government to teach the fruits of secular investigation, such as science, history, etc. In some (perhaps many) cases, it may be difficult to draw the line between secular investigation and sectarian beliefs.

As Debora again pointed out, among the most difficult areas would be the teaching of values like honesty, good citizenship, and patriotism. Do we want the government to be teaching those beliefs?

Saturday, April 07, 2007

Dimensions of human experience

It recently occurred to me that people who are celibate, either intentionally or despite their best efforts, miss a dimension of experience—not just sex but intimacy. It's a dimension of experience that can't be reproduced through a combination of other experiences. In mathematics one would call it orthogonal to other experiences—the term dimension should be taken literally. It adds a new dimension to one's range of possible experiences.

The next thought was to ask whether anyone has attempted to identify the dimensions of experience we are capable of. Of course, that's probably too simplistic a notion. Debora asked whether eating chocolate cake is fundamentally different from eating steak. It would probably be very difficult to find a collection of mutually orthogonal experiential dimensions. But it would be interesting to try. I wonder whether such an attempt has been made and with what sort of result.
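Taking the mathematical metaphor literally for a moment: a dimension is orthogonal to others precisely when no combination of the others can reproduce it. A toy numerical illustration (the "axes" here are arbitrary stand-ins, not a claim about actual experiential dimensions):

```python
# A literal reading of "orthogonal dimension": a vector with a component
# outside the span of the other axes cannot be reproduced by any
# combination of them. Toy 3-D example with invented axes.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Two "known" experience axes spanning the xy-plane.
e1 = (1.0, 0.0, 0.0)
e2 = (0.0, 1.0, 0.0)

# A third experience with a component along a genuinely new axis.
intimacy = (0.2, 0.5, 1.0)

# The best approximation of `intimacy` using only e1 and e2
# is its projection onto their span.
proj = tuple(dot(intimacy, e1) * a + dot(intimacy, e2) * b
             for a, b in zip(e1, e2))
residual = tuple(x - p for x, p in zip(intimacy, proj))

print(proj)      # (0.2, 0.5, 0.0) -- the reachable part
print(residual)  # (0.0, 0.0, 1.0) -- the part no combination can supply
```

The nonzero residual is the literal content of the metaphor: whatever lies along the new axis is simply unreachable from the old ones.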

Sociable Darwinism

Natalie Angier is a wonderful science writer. She recently reviewed David Sloan Wilson's Evolution for Everyone: How Darwin’s Theory Can Change the Way We Think About Our Lives. (Click here for all my posts about this book.) She says that according to Wilson
all of life is characterized by a “cosmic” struggle between good and evil, the high-strung terms we apply to behaviors that are either cooperative or selfish, civic or anomic. The constant give-and-take between me versus we extends down to the tiniest and most primal elements of life. Short biochemical sequences may want to replicate themselves ad infinitum, their neighboring sequences be damned; yet genes get together under the aegis of cells and reproduce in orderly fashion as genomes, as collectives of sequences, setting aside some of their immediate selfish urges for the sake of long-term genomic survival. Cells further collude as organs, and organs pool their talents and become bodies. [Emphases added.]
In other words, commons and cooperatives are bottom-up constructions, like almost everything else. In much of human history, however, the mechanisms that commons use to govern themselves have sometimes been taken over by tyrannical or dictatorial forces, turning the commons into a top-down structure serving those who control the governance mechanisms. Of course, since it happens, that too is part of nature. But it's worth recognizing that commons are fundamentally bottom-up creations.

Angier goes on paraphrasing Wilson as follows.
The conflict between being well behaved, being good, not gulping down more than your share and being selfish enough to get your fair share, “is eternal and encompasses virtually all species on earth … . [I]t is predicted at … a fundamental level by evolutionary theory.” How do higher patterns of cooperative behavior emerge from aggregates of small, selfish units? With carrots, sticks and ceaseless surveillance.

Humans are equipped with all the dispositional tools needed to establish and maintain order in the commons. Studies have revealed a deep capacity for empathy, a willingness to trust others and become instant best friends and an equally strong urge to punish cheaters, to exact revenge against those who buck group rules for private gain.

[Emphases added.]
What strikes me about all this is how paradoxically centrist it is. It's the implicit wisdom of the broad political center. People are fundamentally good — but let's not do away with our police force. We now have science to back up that instinct.

The larger lesson is to understand the naturally occurring mechanisms that have been successful in establishing well-run commons and then to apply them both to our current commons and to the new commons we continually find ourselves building.

Friday, April 06, 2007

Mathematics and abstraction

This month's Communications of the ACM (CACM) has an article that asks (apparently rhetorically) whether abstraction is the key to computer science. It argues, of course, that it is. It also claims that mathematics is a good way to teach abstraction. Here's a letter I've sent in reply.
Abstraction is certainly the key to computing, but mathematics is not the key to abstraction. Abstraction is all about ideas, finding the essence of something. The key to computing is figuring out what is really important in an incompletely understood situation and developing a conceptual framework that captures its essence.

Mathematics, especially as taught at the undergraduate level, has nothing to do with helping students get to the essence of something. Undergraduate mathematics is primarily taught as instruction in a foreign language. Students are required to learn specific, previously defined concepts, and they are required to learn a new and often opaque notation in which those concepts are expressed. As typically taught, undergraduate mathematics may provide students with powerful tools, but it has nothing to do with learning about the process of abstraction.

Even worse, mathematics is sometimes taught strictly as a formalism: if the symbols can be manipulated properly one doesn't really care what they mean.

A nice way to teach computer science students about mathematics as an abstraction process would be to teach it as analogous to software design patterns. Software design patterns give students useful ways to think about abstract aspects of software. Mathematics can be taught as useful ways to think about abstract aspects of problem spaces. If taught from that perspective, mathematics can help undergraduate computer science students develop their intuition about abstraction at the same time as they are mastering mathematical tools and concepts.

Tuesday, April 03, 2007


9 Chickweed Lane has a nice definition of faith.

Since they don't keep the images available indefinitely, here's the text.
Faith: the unknowable promoted to the irrefutable.

The Drug Wars

The United States has spent the past 30 years fighting the so-called war on drugs. Americans have paid a heavy price, both financially — the drug enforcement budget is now $40 billion — and in civil liberties, with laws that turn 'innocent until proven guilty' on its head.

During the week of October 9th, NPR News airs a series from correspondent Deborah Amos, War on Drugs, on All Things Considered that explores why, after three decades of effort and billions of dollars in expenditures, America's war on drugs has no victory in sight. Coverage includes a look at Mexico, money laundering, corruption and drug treatment.
Neither this series nor any discussion of the drug problem that I have seen has covered the best and most obvious solution: safe, affordable, and effective substitutes for illegal drugs. Clearly a great many people desire the effects that they can get from drugs. We should be encouraging the development of means that will produce those effects legally and safely—for both the user and society.

Why don't we? Presumably it's our puritanical culture. No so-called mind-altering substances are acceptable within that culture. Why is that? I no longer know. We are perfectly willing to accept anti-depression, anti-psychotic, and anti-anxiety drugs. Why we as a culture reject other sorts of mind-altering drugs is more and more of a mystery. It's time we woke up and started to sponsor research into drugs (or other means) that would produce the sorts of effects that people who take illegal drugs want.

This week NPR is doing a number of follow-up stories on methamphetamine. Prohibition didn't eliminate the desire of many people for the effects of alcohol. The war on drugs won't eliminate the desire of many people for the effects produced by drugs, like meth, that remain prohibited. Let's find safe, affordable, and effective substitutes for illegal drugs. We can do it. We are, after all, a market driven economy. The consumer is always right. Let's put that ethic to work to solve the drug problem. Let's stop acting like each other's nanny and begin a serious effort to find ways to satisfy this human need in a rational way.

Sunday, April 01, 2007

The joys of software

Steve Hsu has a blog called Information Processing. He has a piece about software. I left the following comment—toward the bottom. (It's his picture to the right, but my comments follow.)
For me there are two primary joys to software. The first is the creation of something real in the world. The first time I created a successfully running program (a very! long time ago), I thought that this was the closest a man could come to knowing what it was like to give birth.

The second is the joy of abstraction. It's probably the same for physicists: the realization that there is an abstraction that incorporates a number of previously disparate elements as special cases. It's like climbing a mountain. At each level, one gets a better view of the terrain. But at each level one catches glimpses of yet higher peaks with better views.

Computer science has been called applied philosophy. It's the only discipline that has created languages that can both (a) represent ideas in one's head and (b) produce results in the physical world. Software is the externalization of thought in a symbolic and executable form. No other discipline can claim anything like that.

This is terrible

What's wrong with this picture?

It takes a bit of staring before you see it. Don't give up too soon! It should pop out at you after about half a minute.

Credit to Wired.