When the smartphone revolution began almost 20 years ago, you knew when you were using a smartphone and when you weren't. You knew when you were sitting down at a computer, when you were opening a social media app, and so on.
I think the big difference here is that AI is everywhere and in everything, almost without user consent. No industry is safe. Education isn't safe. Childhood isn't safe. Religious communities aren't safe. Text exchanges with family members aren't safe.
For months now I've recognized the need to establish a set of personal values and safeguards around AI. These apply primarily to me and to the domains I oversee, but they will also shape who and what I engage with and consume from.
In many ways I think this will be the issue of our time. What does it mean to be a human? Is there value in creating or only in the completed product? What do we gain from the struggle of the creative process?
Also I see opportunity everywhere.
As generative AI takes over and we realize we can't trust anything that comes through a screen (soon, not even the person on the other side of a Zoom call, who could be an AI avatar authorized to speak on their behalf), real-life, real-world interactions become that much more poignant and beautiful.
Right now I lead a community writing workshop on a weekly basis. IT IS REAL. No one is using AI. We write together with pens in notebooks. We read our work aloud together. This will remain a safe space.
I can see other safe spaces springing up too. For instance: we gather and paint or make art, together and in real time. Then we walk next door, hang our work in a gallery right away, and have a show. It's real. It's human. And we can trust it.
I also see communities forming of people who choose to opt out of the generative AI devolution. There's already a lot of thoughtful writing about this out there.