Monday, September 15, 2014

Constructive Complexity: Follow-up notes

Just a couple of notes about constructive complexity.

Originally this was going to be part of an article entitled Constructive Complexity Today, where I'd go over things like body modification and ecological conservation and explain why they would be considered Good in any ethical system that holds constructive complexity as its terminal value. Then I realized that most of it was superfluous.

I guess the polyamory one might bear some discussion later on, but still, the majority of the article could be summed up as "These things: they are good. Because net complexity." That is to say, until proven otherwise I'm going to assume that if you think paving over the entire world is a good thing, then our disagreement lies in our terminal values and not in how we're interpreting constructive complexity.

There are still some things worth saying, though.
Table of Contents
I. Agency, or Liberty
II. A Quick Note on (Transhumanist) Abolitionism
III. Constructive Complexity vs. Alternative Terminal Values
IV. Why Love Doesn't Work (as a terminal value)
I. Agency, or Liberty

To the extent that an agent is restricted from acting, ze ceases to be "one that acts" and becomes "one that is acted upon." An agent that is totally restricted from acting for zemself is the functional equivalent of a rock, albeit one that is conscious of its powerlessness.

Agents are more complex by virtue of their ability to act for themselves, and their complexity increases, among other ways, as they gain greater power to act. In light of this, for the purpose of increasing net complexity alone, it is imperative to allow all agents to act for themselves. Agents must even be permitted "the right to self-destruction," though it may be reasonable, as in other situations, to temporarily bar them from such a course if they are not, as we might say, "in their right minds," just as you might prevent an intoxicated friend from doing something that you know ze would, if sober, never choose to do. Besides, there is a very fine line between some quantity of dolors and self-destruction, and dolors are crucial to constructive complexity.

The right to self-destruction is necessary in order to more surely preserve an agent's "right to withdraw" from a situation, relinquishing both the rewards and the penalties of further participation. If an agent wishes to withdraw and relinquish all the consequences of further participation, but cannot, then it may be concluded that an element of coercion has come into play, which means that the agent is being restricted and, to the degree that this is true, being made an object to be acted upon according to the will of others. An agent must therefore have the right to withdraw and, as the full extension of this, the right to self-destruction, which might be regarded as the ultimate withdrawal when carried out with finality.

Agency can be restricted without reservation only where it is being used to restrict the actions of other agents. Murder prevents the victim from deciding how, and whether, to live zir life. Theft prevents the victim from deciding how to make use of what was stolen.

Where we fear that unrestricted agents may make the "wrong" decision, it better serves constructive complexity to educate those agents than to restrict them.

Some anticipation of motives may be acceptable in determining whether an action should be permitted. Attempting to acquire a sample of a virulent disease may be reasonable in a scientific context, but in many other contexts it should raise red flags for anyone aware of it. Whether this anticipatory, proactive restriction of agency is permissible, and if so where the line should be drawn, are questions for another time.

Because the potential for complexity-generation increases with the agent's ability to act, we must place a high value on what we could call "human flourishing." That is to say, self-expression and self-development.

II. A Quick Note on (Transhumanist) Abolitionism

Look, it's like this: you need to get rid of the pain that breaks people, of course, because a permanently broken person is a less complex person. Once you get that out of the way, though, any further elimination of our capacity to experience and process pain will unnecessarily reduce the net complexity of the universe. If you make it all voluntary, then your so-called dolors aren't dolors, really. You're not truly experiencing dolors; you're experiencing your closest equivalent to them and play-acting at experiencing dolors. It's an exercise in masochism, and while there's nothing wrong with that (our universe minus masochism is less complex than our universe as it is), it still means a net complexity loss if this is the only way left to be.

III. Constructive Complexity vs. Alternative Terminal Values

Why choose constructive complexity over any other terminal value?

The first thing that should be made clear is that nothing requires a civilization to take this as its terminal value. It seems to me that constructive complexity is nearly mandatory as the terminal value of any civilization interested in creating other species and civilizations, and it is certainly a requirement if we wish to transmit the majority of what we might call the "human experience" downriver in time. If I am wrong on either of these counts, I will update my beliefs upon discovering this to be so. Regardless, this doesn't mandate constructive complexity. It just establishes that if a civilization is going to get into the business of creating worlds without number, then we can be pretty sure what its terminal value is.

The importance of setting our affairs in order and appointing one value to rule them all comes from the fact that we stand at the brink of being able to shape ourselves on a scale once thought impossible. Geologically speaking, we are not far off from the time when we will either be extinct or capable of fashioning ourselves and our descendants like so much clay, in both flesh and mind.

It is important to know what we truly value, lest what we make of ourselves not only care nothing for it but be indifferent to its loss. A single terminal value must be chosen, moreover, because to do otherwise would be to invite conflict whenever two of those values came to be at odds. In such a situation one would either be unable to act or would eventually choose one over the other, finally elevating it to the position of sole terminal value; in which case it only makes sense to make that decision now, when it can be done carefully and with a clear head. Some may try to hold up examples of terminal values that could never come into conflict, but further examination will show that they either lack imagination or are holding up two manifestations of the same value.

Hedonism, evolutionary fitness, and knowledge were treated in the previous article. Of these, only a civilization whose terminal value is knowledge would also be engaged in the creation of civilizations, and not even that is certain.

IV. Why Love Doesn't Work (as a terminal value)

It's been suggested to me several times that love would be a suitable alternative to complexity, hedons, and other values. Especially as a motivation to create worlds without number.

But love doesn't work. As a terminal value, anyway.

Looking it up, we see definitions like "intense feeling of deep attachment" and "deep romantic or sexual attachment." That's not really what's being talked about here. What's meant is closer to "putting someone else's needs before your own" or "balancing your own happiness with theirs in a way that makes both of your lives better." Substitute anything else you'd like for "happiness."

The issue is that none of these definitions says, in itself, what is good or better or desirable. Love is like a good of second intent: something that has no value in itself but is useful for its ability to help us get things that do have value. It isn't a terminal value in its own right, just a way of operating in relation to your terminal value. A hypothetical God that valued complexity would act in one way, and a hedonist God in another, but both could consider themselves to be acting benevolently and out of love. They could even both be right, because loving behavior necessarily changes based on one's terminal value, being directly related to it.
