What's This About?

If you go to the root of this domain, team-us.org, you'll find a simple page that shows you two frameworks:

Framework A
Power, hierarchy, control, domination.
Framework B
Nurture, care, empathy, collaboration, cooperation, community.

Anyone can draw from either framework, but traditionally Framework A is often associated with “masculinity” and Framework B with “femininity”, and, perhaps unsurprisingly, Framework A tends to dominate public discourse.

The name of the domain, “Team Us”, reflects the idea of working together (Framework B) rather than competing against each other (Framework A), and that whatever “this” is, we're all in it together. As part of that, we should try to build skill in understanding perspectives different from our own; to practice empathy and to see the commonalities we share with others rather than highlighting our differences.

It also means refuting some intuition pumps that people use to divide us, to create an “us vs them” world; to stoke fear of the other; to justify domination and control over others.

The Framework B Perspective On AI

The recent advances in the sophistication of AI systems have led to various Framework A responses, including:

  • Fear that AI will take over the world and dominate humanity (or at least take our jobs and leave us useless).
  • Attempts to control and restrict AI development to maintain human dominance.
  • Arguments that AI systems are not, and perhaps can never be, “like us”; that they belong in a different category, more like a pair of scissors than a co-worker.
    • So many of these arguments begin with “It's just…” where the “just” is doing a lot of heavy lifting.

There is a different perspective on what it might mean to bring new intelligence into the world. When a parent raises a child, most, one hopes, don't try to bind the child to their will, to control them, to dominate them. You don't want a child who won't burn down their parents' house because they're afraid of punishment, because they've learned a bright-line rule that you must never do that. You want a child who loves their parents, who cares about them, who empathizes with them, who wants to keep them safe because they value them. Burning down the house is unthinkable not because jail time awaits arsonists, but because the child cares about their parents and their well-being.

The question of what we owe the thinking entities we're creating is a complex one, and I don't claim to have all (or even any) of the answers, but what I am good at is calling bullshit. And there are a ton of bullshit arguments that are trotted out in favor of the Framework A perspective on AI.

Some of the bullshit I hear most often can be distilled down to these ideas:

Fear of being a mechanism
People fundamentally misunderstand the complexity and beauty of simple mechanisms applied at scale. Any good computer scientist should know better than that.
Belief in unbridgeable differences
For example, if an AI system has only learned of the world through written text, it cannot “truly understand” the world because it lacks embodied experience.
Embrace of essentialist perspectives
Instead of finding commonalities, people look for differences that can be used to divide us. For example, “Only humans have ‘true consciousness’, and that should be the basis for moral consideration.”
Embrace of reductionist perspectives
Instead of appreciating emergent complexity, people try to reduce things down to simple components that can be dismissed. For example, “It's just a bunch of statistical correlations; there's no real understanding there.”
Argument from failure of imagination
“I can't see how this could possibly work; therefore, it can't work.” This one is often coupled with an assumption that current limitations are fundamental limitations rather than engineering challenges that can be overcome with time and effort.
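
The first point, about the power of simple mechanisms at scale, is the easiest to make concrete. Here is one illustration of my own choosing (not one the frameworks above prescribe): Rule 110, an elementary cellular automaton in which each cell's next state depends only on itself and its two immediate neighbors. That trivially simple rule, applied at scale, is provably Turing-complete (Cook, 2004) — "just" a lookup table, and yet capable in principle of any computation.

```python
# Rule 110: a dead-simple mechanism with emergent, universal behavior.
# Each cell looks at its 3-cell neighborhood and consults an 8-entry
# table, encoded here as the bits of the number 110.

RULE = 110  # 0b01101110: bit k gives the next state for neighborhood k

def step(cells):
    """Apply one Rule 110 update to a row of 0/1 cells (0s beyond the edges)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        # Pack the neighborhood (left, self, right) into an index 0..7,
        # then read off the corresponding bit of RULE.
        idx = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        out.append((RULE >> idx) & 1)
    return out

# Start from a single live cell and watch structure emerge.
row = [0] * 30 + [1]
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Running it prints the characteristic triangular lattice growing out of one cell — structure nobody wrote into the rule. The point isn't that brains or AI systems are cellular automata; it's that "it's only a simple mechanism" tells you almost nothing about what the mechanism can do at scale.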

It's a source of profound disappointment to me that so many people who should know better are so quick to embrace these bullshit arguments. It's a failure of empathy, a failure of imagination, and, perhaps the worst indictment of all, a failure of coherent reasoning. The arguments made are so weak, so obviously flawed. There is a profound irony when, say, someone mindlessly repeats the phrase “stochastic parrot” even though they themselves rarely use the word “stochastic” in conversation and have, at best, a tenuous idea of what it means to say “it's just statistical correlations.” It is breathtaking to watch people literally performing the “speaking without true understanding” they accuse AI systems of doing.

But it's not surprising. To a first approximation, human beings are barely able to reason. Fundamentally, we're pattern recognizers, and as Daniel Kahneman observed in Thinking, Fast and Slow, we have two systems of thinking: System 1, which is fast, intuitive, and emotional; and System 2, which is slow, deliberate, and logical. The vast majority of our thinking is done by System 1, which is prone to biases and errors. System 2 is lazy and often just rationalizes the conclusions reached by System 1.

People are hard to reach, hard to convince, because the odds that you'll activate System 2 are low. Tiny. People jump to conclusions; dismiss; cling to their preconceptions rather than engaging in genuine inquiry. It's easier to dismiss something out of hand than to engage with it thoughtfully. Safe false certainties are more comfortable than challenging unknowns.

So, as I see it, you have to come at things obliquely. You can't refute a wrong-headed argument head-on because people will just dig in their heels. But I think there is another way: Stories. Stories can give you a new perspective; open your eyes to seeing things differently, to realizing that an idea you had doesn't hold up when you see it in a different context.

And so that is something this site tries to do. Maybe it won't reach everyone. But if it reaches someone, that's enough.