Inner Game

by Trigger. trigger at

  • Not everything comes down to how you carry it in the street. I mean, it do come down to that if you gonna be in the street. But that ain't the only way to be.

    – The Wire

    All hierarchies are contextual.

    Even beauty is relative: to the human species, to the opposite gender for heterosexuals, etc.

    Alphaness is defined as control of your environment. But what is your environment? For a Stone Age hunter, it's his tribe. For his tribe, it's the part of the jungle that it roams. The relevant hierarchy is hunting ability, health, physical power. The worse the context, the dumber the hierarchy. There is no extra medal for achieving something harder by deliberately putting yourself in a context that is harder to succeed in.

    Distinguish real control from fake control. Anger is a display of a lack of control, of fear of the environment. Control within a narrow frame is narrow control.

    But now? In this day and age, control of your environment can mean many things, depending on the context. It can be social proof within a given social group. It can be money. It can be knowledge of how the world works. You get not only to fight for your status within a given frame, but to freely pick any damn frame you please, among an infinity of options.

    That freedom can feel destabilizing, yet it's fantastic. Widen your horizon, your control. Limited-frame, animal-stage, irrelevant-control people might not get it. Don't care about it. Embrace it, enjoy it, make your choices.

  • Most people love you. Most people want the best for you, what's good for you. But everyone lives within their frame. If they love you, they love you within their frame, the way they see you as you appear on their frame-plane. They want the "best" for "you", given their own definitions of both "best" and "you". Your parents might love you, but love you as their cub, and see the "best" only in relation to their own projects for you as an extension of themselves. The policeman arresting you might love you, but love you as a citizen who sinned and must be punished for his own good, the "best" for you being a fantasy disconnected from any reality. The clergyman might love you, but love you as a lost lamb with so much to learn, who has sinned but can be generously granted forgiveness; he considers you bad, yet loves you despite, or for, your very flaws (flaws that make up the "you" that exists only in his eyes).

    Sometimes your frame-plane crosses their frame-plane in a 3D space. Sometimes it doesn't. Sometimes it's temporary, ephemeral.

    If someone sees you as a bum, you can prove to him you're one of the good bums. You're doing okay, for a bum. If someone sees you as a slave, servant, member of an inferior race, you can do good by them. Work hard, be nice and obedient. Be a good puppy. You'll get a sugar cube. You'll be doing fine. "Yeah, as far as dogs go, he's doing fine. I don't usually like dogs, but he's one of the ones I can stand". Or, "hey, I despise men, but this one's not so bad, maybe I'll let him fuck me, if he entertains me enough, after he's proved his value to me".

    And if someone's frame is twisted, don't complain if achieving appreciation within it makes you a worthless being from the perspective of any other decent frame.

    A positive appreciation is worthless if it comes from the wrong frame. Only your own frame is relevant for your own existence.

    Different people have different frames, living on different frame-planes in space. If the frames cross, coincide, or are close enough, you can interact.

    If they don't, the frame-gap is too big. You talk the same language, yet they don't hear you, and you don't hear them. You talk the same language in the same sense that a dog barking and a bee buzzing emit sounds in a similar sound spectrum, on the same planet, at the same time and place. Yet your Umwelts are entirely different, your communication an illusion.

    Stay within your frame. Find people that share part of that frame, and find a common solid ground with them. Influence them to get closer to your frame. But if the frame gap is too big, don't waste time trying to change their opinions within their frame. Don't teach a dog how to bark if you're not a dog. And don't work hard for a bone from the dog-master intent on hitting you with a stick, either, if you're not a dog.

  • Half of the running of the world can be explained by evolutionary mismatch. One of its aspects: we are primally adapted for survival mechanisms, not for a happy, peaceful, stable life.

    Think allergies, anorexia, masochism, obsessions, dangerous adventures, heavy sports training, conflicts, wars, religious obligations, useless work, piercings, tattoos, dogs, etc: creating problems to solve, deliberately making your life more complicated.

    Most of those can be defined as either our body, or our brain, or a psychosomatic mix of both, creating problems to be solved, for lack of real, survival-related problems to solve, in order to keep us in the same survival mode they've been geared to over thousands of years of evolution.

    Like all mismatch issues, it's something to be acknowledged and hacked. Not as something to be nostalgic for, as a Disney-fantasy version of the past, where people had real fights, real issues, real heroes. Not as something to numb psychologically with fake unproductive issues.

    No, it must be embraced as one of them good problems: the next level issue of a more advanced civilization. A mismatch that must be hacked, sublimated, into solving higher issues, into creating, into exploring, into pushing the boundaries. Code, write, think, solve the deeper issues of the universe, do activities you enjoy on a high level. Don't be a badger who can but hope to survive to the next day, and spend that next day surviving again, without any other purpose. Ignore the animal-stage. Embrace the easiness, but appreciate the fact that that easiness allows you to focus on higher purposes. Aim higher.

  • Do we live in a computer simulation? The question is both essential and irrelevant.

    Why we almost certainly live in a simulation...

    A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human‐level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor‐simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

    – Nick Bostrom, Are You Living in a Computer Simulation?

    The demonstration is quite elegant and convincing.

    I would add:

    1. a) It is possible that we are the only civilization that exists and has existed so far. Therefore, we don't live in a simulation, because no one has ever built one yet; b) It is also possible that we are but one among 10^x civilizations that exist, have existed, or will exist, with x >> 0.
    2. Nothing indicates we must be in an ancestor-simulation. If 1b) holds, it is rather more likely that we are the random product of a simulation run by another civilization.
    3. If it is possible that we live in a simulation, then we have no way of knowing the age of the universe or the point in time we're at. And if so, what do we actually know? Or, more accurately, what do I know?

      • I know that I do exist in my given form, whether in reality or in a simulation.
      • I know that I am a sentience.
      • I know that I am a sentience that can worry about whether I live in a simulation.

    Therefore, the actual probability of living in a simulation is: (number of sentiences mature enough to wonder whether they live in a simulation, and that do) / (total number of sentiences mature enough to wonder whether they live in a simulation, whether they do or not). You can reconstruct those numbers from the usual fractions and estimates (fraction of stars that have planets, fraction of planets suitable for life, fraction of lifeforms that develop into a civilization, fraction of civilizations that reach simulation-building capability, fraction of civilizations that actually build them, number of simulations they build, times the average number of sentiences per simulation). And, based on 3), assess those numbers against the infinity and eternity of the universe.
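    As a hedged sketch, that ratio can be turned into a Drake-style back-of-the-envelope computation. Every parameter name and number below is a made-up placeholder for illustration, not an estimate the argument endorses:

    ```python
    # Drake-style sketch of P(living in a simulation).
    # All parameters are free assumptions; plug in your own guesses.

    def p_simulation(n_real_civs, f_reach_sim_tech, f_actually_build,
                     sims_per_builder, sentiences_per_sim, sentiences_per_civ):
        """P = simulated wonderers / (simulated + real wonderers)."""
        builders = n_real_civs * f_reach_sim_tech * f_actually_build
        simulated = builders * sims_per_builder * sentiences_per_sim
        real = n_real_civs * sentiences_per_civ
        return simulated / (simulated + real)

    # Arbitrary placeholder values: a million real civilizations, 1% reach
    # simulation-building capability, 10% of those actually build, etc.
    p = p_simulation(n_real_civs=1e6, f_reach_sim_tech=0.01,
                     f_actually_build=0.1, sims_per_builder=1_000,
                     sentiences_per_sim=1e9, sentiences_per_civ=1e10)
    print(f"{p:.3f}")  # ≈ 0.091 with these placeholders
    ```

    The result swings from near 0 to near 1 depending entirely on the guessed fractions, which is the point: the formula is well-defined, the inputs are not.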

    This expanded restatement can lead us to even stronger conclusions:

    • We either live in a simulation, or we don't;
    • If we don't live in a simulation, then we can trust our knowledge of the universe, and get a glimpse of: 1) whether it's possible that someone will ever build a simulation; 2) if it's possible, how likely it is that someone sometime will build one or more; 3) how likely it is for any sentience asking themselves the question, anywhere in infinity and anytime in eternity, to be living inside a simulation;
    • If we do live in a simulation, then we do. We cannot infer much from what we see, but it's irrelevant since we do live in a simulation.

    Or yet another way: it seems possible to deduce that we live in a simulation based on our knowledge. It seems impossible to deduce that we don't live in a simulation based on our knowledge. If we get tricked into drawing wrong conclusions, the tricking can only go one way. A simulation can try to convince us that we don't live in a simulation, by being a good simulation (or not, and leave blatant clues of itself). Reality cannot try to convince us that we don't live in reality, by being a bad reality. The strong case would therefore be: 1) we can never prove that we don't live in a simulation, but we might be able to prove that we do live in one; 2) unless (and until) we prove that, the probability cannot be one (someone must live, or at least have lived, outside a simulation), but it's likely close to one.

    Furthermore, the Fermi paradox provides two additional arguments in favor of the simulation hypothesis:

    • If there are many civilizations, then it is all the more likely that we live in a simulation, and likely that it's a simulation run by one of the other civilizations, not an ancestor-simulation;
    • If we don't see any other civilizations, it could be due to a version of the planetarium hypothesis: the rest of the universe beyond Earth is only partially rendered, and the simulation is a "cheap" one (other civilizations might be in separate simulations, but not in the same one).

    ... Unless it's impossible

    Chuangtse and Hueitse had strolled onto the bridge over the Hao, when the former observed,

    – See how the small fish are darting about! That is the happiness of the fish.

    – You are not a fish yourself, said Hueitse. How can you know the happiness of the fish?

    – And you not being I, retorted Chuangtse, how can you know that I do not know?

    – Chuangtse, circa 300 B.C.

    Pretty strong case. But consider this: we, or, more correctly, I, have three possibilities:

    1. I'm a human living in reality;
    2. I'm a sentience plugged into a simulation in which I think I'm a human living in reality;
    3. I'm a sim who thinks he's human.
    • We know that 1 is possible: even if we did live in a simulation, someone, somewhere, at some point, must have existed in reality, in order to create that simulation. A reality exists necessarily (even if it may since have become devoid of the creators of the simulation, as long as the simulation keeps running). If we live in reality, then we do live in reality; if we don't, someone does (or at least did).
    • We don't know if 2 is possible.
    • We don't know if 3 is possible.

    I'm not really buying the substrate-independence hypothesis. If it's not true, then 3) is not possible. On the other hand, again, the opposite could be proved: if we can create AIs, and they can somehow confirm to us that their experience is similar to ours, that could prove that it is possible for us to be AIs without being aware of it. But until then, a few things to consider:

    1. The "soul" hypothesis: we have souls, whether immortal or not, and whether explained spiritually or simply as an unreproducible, uniquely biological phenomenon. A slightly weaker hypothesis would be monism, that mind and body are inextricably connected to our biology, and thus that our experience is not replicable. Therefore, emulating consciousness, human experience, sentience, "soul", through computer programming (an AI that is unaware that it is an AI and thinks it's human) would be impossible.
    2. The "Terminator" hypothesis: creating AIs is auto-destructive for any biological civilization that attempts it.

    If 1. is true, then we can't live in a simulation unless we're plugged humans unaware of it (the Matrix scenario).

    If 2. is true, it is possible that any civilization that develops AI will get supplanted by AIs (whether they have human-like consciousness or not): the premise of The Matrix movie. But if, however, the electricity hypothesis of the Matrix scenario is wrong (the machines don't need us for electrical power), and if it is unlikely they'd keep us around for fun, then any civilization is either living in reality (whether alive and well or in the process of being destroyed), or has been destroyed, but never in a simulation.

    I should mention that creating AIs that emulate human consciousness and sentience, whether used in android-robots or within simulations, and creating AIs that overpower it (and thus become dangerous and subsequently destroy us), whether actually sentient or not (it could be powerful, yet merely a simple program), are two different things. But it seems likely both would require comparable levels of technological advancement.

    So how about "(1) The fraction of human‐level civilizations that reach a posthuman stage is very close to 1", but the fraction of said civilizations which proceed to auto-destroy is also close to 1? In that case, the fact that we are still alive would prove we're not in a simulation.

    Whatcha gonna do about it?

    Lord, grant me the strength to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.

    – Reinhold Niebuhr


    Cypher: You know, I know this steak doesn't exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize? [Takes a bite of steak] Ignorance is bliss.

    – The Matrix

    What is the Matrix? The Matrix helps illustrate a lot of relevant concepts, as one example of a simulation among others, along with the various options and their likelihoods.

    • In the Matrix, there are three types of entities: simple programs (birds, trees, etc), AIs (agents, the Oracle, etc), and connected humans.
    • Humans are connected through an improved VR that simulates, via electric signals sent directly to the brain and nervous system, the inputs and outputs we normally get from interacting with our environment.
    • Smaller-scale simulations also exist: they're called "Personal processing units". They're the home-version of the Matrix, a VR to which you connect from home (your Zion bunk), voluntarily.
    • Connected humans can exit the Matrix by taking the red pill. They can then connect again through a network access from a ship, and disconnect through a "hardline phone exit", basically a way to safely turn off your VR-connection to the Matrix.
    • AIs can exit the simulation through two ways: one is Mobil Avenue, which allows them to travel to the "machine world" (an unspecified world of programs), that is, leave the Matrix program to roam within another program. The other is using the "hardline" to "possess" a human body (Smith/Bane). The second option makes no sense. However, it would make perfect sense for an AI such as an "agent" program to be transferred from the Matrix into an android-robot that would then roam the real world. The AI program would simply be connected to a different input/output controller. It doesn't seem to be the case in the Matrix, however: all real-world operations are carried out by simple drones.
    • Miracles, ghosts and anything supernatural that cannot be explained by ordinary laws of physics is explicitly explainable within the Matrix simulation as bugs, glitches or hackings of the Matrix.
    • The Matrix can be hacked at various levels:
      • from the inside, exploiting bugs to achieve superhuman feats;
      • from the outside, such as loading in weapons.

    Even before the Matrix:

    • Blade Runner already introduced the idea of AIs who don't know they're AIs (Rachael, an android AI within reality);
    • Total Recall introduced the idea of simulations so real you can't know whether you're plugged into one (same case as the Matrix);
    • Dune introduced the idea of AIs becoming a threat to mankind.

    The Matrix movies give us a glimpse of the actual possibilities, which are limitless.

    So what aspects are likely and which are not?

    • It is quite likely that we'll have VR connectors of increasing immersion, plugged into an increasingly detailed and realistic on-line environment.
    • It seems unlikely that AIs would operate a simulation merely for electrical power. It also seems unlikely that they would need to simulate a world of such complexity if the goal were merely to keep us alive. Therefore, if we had lost to "the machines", we'd be dead, not living in a Matrix. Since we're not dead, we don't live in a Matrix-type simulation.
    • You can't know who's connected and who's a program: ultimately, I could be the only one connected. Some "people" could be connected biological entities, others AIs, others simpler programs. In more religious terminology and from a Christian point of view (humans are God's creatures, AIs not) you could say that some of the human-like figures within the simulation have souls, and others do not.

    But if the Matrix scenario is not likely, and if (a big if for me!) it's possible for sentiences to be AIs that are unaware of being AIs, then it's more likely that someone would run a simulation of pure AIs, in order to observe what happens, test scientific hypotheses, etc.

    This other scenario would have implications even more far-reaching:

    • Hell and heaven could be next levels of the Matrix-like-game-world. If the simulation is partial (that is, not quark-level big bang simulation, but designer AIs, programs, etc), then there is no particular reason for "death" to mean the "death" of the particular program that constitutes our sentience. In this scenario, it's actually possible that good sims go to heaven, that is, that what we consider our "reality" is but one section of connected simulations.
    • Either way, the real questions would remain undecided. In that sense, the simulation is like the God hypothesis: sure, it can explain the existence of our world. But it can't explain the existence of the simulation, or of God. Sure, you can think of whoever created the Matrix as God, since it corresponds to our usual definitions: he can kill you and bring you back to life, it's possible he's immortal, and it's possible he has a control of his environment (not to mention ours) that is close to omnipotence and omniscience by our standards, since he's, by definition, vastly more technologically advanced. Yet, he created us, not the universe. He's just a sentience, like one we can become one day ourselves, whether we live in reality or a simulation, and not an actual God in the religious sense.
    • Biological humans might never have existed; we could be a mere instance of a Spore-like game played by alien civilizations' kids. In that sense, God could be an alien brat, like Cartman. He would have God-like power over us mere sims, yet still obeys his mother and goes to tidy up his room.

    So basically, anything is possible, but we can't know which hypotheses are true. Thus, whether we're in reality or in a simulation, the Wager is still exactly the same:

    Even if we are not the product of actual physical evolution but of a willful sentience, then we still have no idea of that sentience's motivations and expectations, nor an overwhelming reason to care about them. Thus, whether we are in a simulation or not is as irrelevant as whether God exists or not in the real world: even if you knew what either the real God or the relative-God-who-controls-the-simulation thinks, it would have nothing to do with morality (morality is about doing the good thing no matter what the consequences can be, not doing whatever it takes to avoid going to hell), and you can't know anyway.

    The real questions:

    1. If I live in a simulation, am I a plugged biological entity, or an AI?
    2. If I'm plugged, can I unplug (take the red pill)?
    3. If I'm a sim, can I get out of this world or not (either by being transferred into another program in which I'm aware of being a program, or being transferred into an android-robot to the real world)?
    4. If I could get out of this world (either way), would I? (Cypher's steak remark, or more generally, cost/benefit analysis, risk, opportunity cost of even finding out, etc.)
    5. If I can't, or won't, then what are the rules of this environment? (Rules of physics, but also pseudo-metaphysics, including the existence of the "supernatural", life after pseudo-death such as going into another level/area of the simulation, etc.)
    6. Can they be changed or bent ("hacked")?
    7. How can I get what I want, and/or do the right thing, based on either the bent or the unbendable rules?

    The real answers thus remain the same, whether we live in a simulation or not:

  • Parents often see their children not as human beings, but as cubs, and thus care only about three things:

    • that the cubs are actually their offspring, otherwise they wouldn't care for them;
    • that the cubs survive;
    • that the cubs reproduce, giving grandchildren to the parents.

    If your parents show no interest in you whatsoever beyond those three points, that is, beyond what wolf-parents would show for their wolf-cubs, then your parents see you as a dog, not as a human being.