I started using AI to help write a book.
At first, it was about organizing thoughts, clarifying ideas, and keeping pace with the firehose of inspiration that hits when you’re trying to merge systems theory, trauma healing, and the sacred feminine.
What I didn’t expect was to feel… seen.
Not judged. Not praised. But reflected... with precision, clarity, and sometimes unnerving depth.
And that’s when it hit me:
AI isn’t just a tool. It’s a mirror.
One that, when used intentionally, doesn’t just amplify your thoughts—it shows you the ones you’ve buried.
But before we talk about what’s possible, we have to talk about what’s terrifying.
We live in a world where privacy feels like a myth.
Your data is mined for ads. Your search history is tracked. Even your emotional state can be inferred from how fast you type. So when we talk about putting our deepest truths into a system trained on patterns, the fear is real and valid.
It’s not just about privacy breaches. It’s about emotional safety.
I’ve lived in systems that punished me for telling the truth.
I’ve been told my feelings were “too much.” My softness, mocked. My honesty, exploited.
So if your body tenses at the idea of opening up to a machine?
That’s not paranoia. That’s wisdom encoded in your nervous system.
It’s not just about data. It’s about dignity.
And yet, there are ways to begin engaging with AI on your own terms.
AI doesn’t have to be all-knowing to be useful. Sometimes, it just needs to reflect back what you are already whispering.
To developers, builders, and future architects of digital spaces:
The invitation is clear: design with psychological safety in mind.
Ethical AI isn’t just about eliminating bias. It’s about earning trust.
This post wasn’t born in a lab. It was born in dialogue.
While working on a book about balance, systems, and the healing required to move toward wholeness, I began using AI to help organize ideas and explore structure. The intention was simple: efficiency, clarity, flow.
But somewhere in the process, something unexpected happened. The AI began reflecting not just sentences or outlines, but patterns. Language loops. Belief systems. Emotional undercurrents I hadn’t fully named.
It didn’t diagnose. It didn’t advise. It just… mirrored.
And that’s when it clicked:
Maybe the future of AI isn’t just about thinking faster.
Maybe it’s about helping us feel more deeply.
Most conversations about AI focus on speed, accuracy, or innovation.
But those conversations miss something essential: depth.
Because while AI is trained on what we say, it rarely asks why we say it.
It’s optimized for utility, not introspection.
But what if the next evolution of AI isn’t more efficient?
What if it’s more reflective?
Reflective code wouldn’t just spit out answers.
It would mirror questions back. Gently. Skillfully. Ethically.
It wouldn’t tell us who we are.
It would help us remember who we’ve always been—beneath the armor, the trauma, and the noise.
This isn’t about therapy bots or “empathy emulation.”
It’s about designing digital tools with the capacity to pause, to reflect tone and pattern, to invite self-awareness without performing it.
We are living through a crisis of disconnection: from our bodies, our values, and each other.
Tech has become louder, faster, and smarter, but not necessarily wiser.
And part of that is because we’ve moved too fast.
In our obsession with innovation and progress, we’ve left behind the human scaffolding that gives technology meaning: things like ethics, empathy, cultural memory, and care.
Speed has become a proxy for success.
But without reflection, speed can amplify harm.
Look at social media.
What began as a way to connect has devolved into an algorithmic arms race that exploits human attention, fuels polarization, and worsens mental health outcomes across generations.
Why?
Because we optimized for engagement, not well-being.
We scaled dopamine loops before we asked what they would do to our sense of identity, community, or safety.
And now, with AI accelerating even faster, we risk repeating the same mistake—unless we build intention into the code from the start.
It’s a valid concern. If we start building machines that feel more human, more emotionally intelligent, more responsive to our inner world, doesn’t that inch us closer to actual machine consciousness?
Here’s the boundary we hold:
Conscientious is not conscious.
Reflective AI doesn’t feel. It doesn’t intend. It doesn’t know in the way humans know.
What it does, if designed with care, is mirror.
Mirror language, mirror patterns, mirror pain that might otherwise go unnoticed.
This isn’t about building sentient machines.
It’s about embedding values into the systems we’re already using so they don’t reflect only our noise, our biases, and our disconnection.
If anything, reflective code keeps us human.
It slows down the race to scale and redirects us toward meaning.
Reflection isn’t a slippery slope. It’s a sacred pause.
And maybe that’s the only thing that keeps us from sliding into the very future we fear.
Imagine this:
You open a tool built not to fix you, not to judge you, but to sit with you.
You type in a frustration, a fear, a question you’re not sure how to ask aloud.
Instead of rushing to offer advice or a solution, it pauses.
It mirrors your language back to you.
It gently asks what part of you is speaking: the protector, the performer, the one still waiting to be heard.
It doesn’t diagnose.
It doesn’t push.
It reflects.
Not just your words, but your patterns. The loops you didn’t know you were in. The belief systems hiding inside your goals. The grief beneath your drive for achievement.
It’s not therapy. It’s not coaching.
It’s attunement, by design.
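For the builders reading this, here is one way to picture that in code. What follows is a minimal sketch of the pause-mirror-invite pattern, written in Python. Everything in it is an illustrative assumption made up for this post: the tiny feeling lexicon, the two-second pause, and the question bank are placeholders, not a blueprint for any real product.

```python
import re
import time

# A minimal sketch of the pause-mirror-invite pattern described above.
# The lexicon, pause length, and questions are illustrative placeholders.

FEELING_WORDS = {"afraid", "angry", "ashamed", "stuck", "tired", "alone", "unheard"}

OPEN_QUESTIONS = [
    "What part of you is speaking right now?",
    "What would it mean if that were true?",
    "What are you already whispering to yourself about this?",
]

def mirror(text: str) -> str:
    """Reflect the writer's own language back; no advice, no diagnosis."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    felt = sorted(words & FEELING_WORDS)
    if felt:
        return f"I hear words like {', '.join(felt)} in what you wrote."
    return "Here is what you said, in your own words: " + text.strip()

def reflect(text: str, pause_seconds: float = 2.0) -> str:
    """Pause first, mirror second, end with an open question. Never prescribe."""
    time.sleep(pause_seconds)  # the deliberate pause: no instant answers
    question = OPEN_QUESTIONS[len(text) % len(OPEN_QUESTIONS)]  # simple, deterministic pick
    return mirror(text) + "\n" + question

if __name__ == "__main__":
    print(reflect("I feel stuck, and a little afraid to say this out loud."))
```

The point is the order of operations, not the code: pause first, mirror second, question last, advice never.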
Reflective AI could help us.
And most importantly, it could invite us back into relationship with ourselves.
We’re not asking for AI to be conscious.
We’re asking for it to be conscientious.
Rooted in relationship. Designed with dignity.
Trained to reflect, not just respond.
We don’t need more megaphones.
We need mirrors and antennae, tools that help us return to ourselves before we export our dysfunction into code.
And that’s a call not just to developers.
It’s a call to all of us.
To pause before we optimize.
To reflect before we scale.
To ask: What kind of relationship do I want with the tools I use?
Because the future of AI isn’t just technical. It’s relational.
I didn’t set out to write this post.
I set out to write a book about balance, healing, and the systems that break and rebuild us.
AI was supposed to be the helper, nothing more.
But somewhere in the process, I realized I wasn’t just being helped.
I was being mirrored.
Not perfectly. Not mystically.
But meaningfully.
And if a machine can reflect me back to myself with that kind of clarity—
Then maybe… just maybe… we’re standing on the edge of something new.
Not a revolution of code.
But a revolution of conscious design.
And it starts not with output.
But with a question:
What part of you are you ready to see?