After some private conversations, a local business owner here in Charleston, WV suggested that I do a seminar for a cohort he's a member of.
Those conversations had just kind of unfolded spontaneously, one-on-one, so I wasn't sure how I'd go about doing a seminar for a group of people I'd never met. I really wouldn't even know where to begin.
So that’s kind of where I began…
I asked myself, “Okay, well, hypothetically, if I had to do a seminar tomorrow, how would I go about it? Where would I even start?”
While they were still fresh in my mind, I reflected on my conversations with David and started jotting down all the key ideas and concepts that we had touched on.
This whole site is basically what emerged from that reflective process and the subsequent structuring of a hypothetical seminar for a group of people I've never met.
The focus is not so much on the technical details of Artificial Intelligence as on the higher-level mindset and philosophy we choose to approach the technology with.
The main idea is laid out on the homepage: the basic concept of a cognitive pyramid, and how the liberation of our time and brainpower will ultimately force society to redirect its collective attention upward on a global scale.
All the sub-pages can then be thought of as branches that go into more detail about related aspects of that main concept.
The sub-pages aren't arranged in any particular order, and they all carry the same weight. Collectively, they're meant to offer a heuristic framework for how I personally think about and approach Artificial Intelligence as it relates to work, business, people, and the future of society.
This About page that you’re reading right now was written by me, in my own words. Hi. I’m Owen. Nice to meet you.
For a lot of the sub-pages, though, what I tend to do is type the specific idea that I'm trying to extract from my own mind into an AI chat window without any filtering or, well, thought. It's like a stream-of-consciousness prompt. I assume the idea I'm thinking about already exists somewhere within the model's neural network, the same way it exists in my own brain. So the goal of the stream-of-consciousness prompt is to sort of hook into the AI model and pull that specific idea out of it, kind of like fly fishing. Or maybe spearfishing. I don't know. It's like some kind of futuristic sci-fi neural fishing.
Anyway, then I make some minor edits to the response I get and publish it.
It's a different craft than writing. I can't take ownership of every word the way I can here, with these words. But I do take ownership of the ideas and of the evolving system I use to extract and present them, and that system basically illustrates an upward shift on the cognitive pyramid.
I figure it's implied, but maybe it's worth mentioning anyway: none of these AI predictions takes into account all the outside variables and influences, like power dynamics, economic and financial instability, civil unrest, geopolitical tension, environmental disaster, nuclear and biological threats, and all the other messy human stuff that makes the world so bizarre and unpredictable.
AI itself is the other obvious variable. It’s easy to forget that an ultimate authority for the technology doesn’t exist and that nobody is going to come along and say, “Okay, here’s how it works. Here’s how you’re supposed to use this thing, and here’s how you’re not supposed to use it.”
So it seems entirely possible that everything ends really badly because of AI. Maybe likely, even.
But it seems equally possible that this is just the beginning, and that we can't even really comprehend how much further into the future this whole thing goes, and how bizarre it gets.
The only thing that seems to be totally certain is that we still have a lot of figuring out to do.
On that note, welcome to the seminar.