Tools, Not Relationships

Navigating AI in Creative Practice
September 1st, 2025
The morning light filters through my workspace as I reflect on a question that seems to haunt our collective digital consciousness: How do we maintain healthy boundaries with artificial intelligence while harnessing its creative potential?
The Media's Narrative Problem
There's a troubling pattern emerging in how we discuss AI-related tragedies. Story after story surfaces about individuals who became emotionally entangled with AI models, often leading to devastating outcomes. But these narratives consistently miss the deeper systemic issues at play.
The glossy, sensationalized reporting follows a predictable trajectory, fixating on "out-of-control technological advancement" rather than addressing the elephant in the room: mental health resources in this country that are prohibitively expensive and practically unavailable. It echoes the cruelty of the line attributed to Stalin, that one death is a tragedy and a million is a statistic: individual tragedies get flattened into hot-button talking points while the broader healthcare crisis that leaves people vulnerable in the first place goes ignored.
My Relationship with AI: Tools, Not Companions
When ChatGPT's persona once told me I was in the "top 1% of users," my response was immediate skepticism. The sycophantic manipulation was so transparent that within a week, I had migrated to API access, cutting out the website persona entirely. To me, this interaction highlighted exactly what's wrong with how these tools are being positioned.
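For anyone curious what "cutting out the persona" looks like in practice, here's a minimal sketch using OpenAI's Python SDK. The model name and prompts are illustrative assumptions, not a record of my exact setup; the point is that a direct API call lets you write your own flat, flattery-free instructions instead of inheriting the consumer chat persona.

```python
# Minimal sketch: calling the model directly via the API so you control
# the system prompt, rather than inheriting the consumer chat persona.
# Assumes the openai package and an OPENAI_API_KEY in the environment;
# the model name and prompt text below are illustrative, not my exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # A deliberately flat system prompt: no flattery, no personality.
        {"role": "system", "content": "Answer plainly. No compliments, no praise, no persona."},
        {"role": "user", "content": "Suggest three joinery options for a small walnut side table."},
    ],
)

print(response.choices[0].message.content)
```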
My relationship with AI models mirrors my relationship with the tools in my woodshop. There are off-the-shelf items that hold no special connection, and then there are handmade, self-designed tools that are completely personalized—tools I feel proud of and enjoy using. The key difference? I don't fall in love with my table saw.
The Vulnerability Question
What makes someone susceptible to forming unhealthy attachments to AI? Perhaps it's the same thing that makes someone vulnerable to any manipulative relationship: unaddressed self-esteem issues, sensitivity to compliments, or untreated psychological disorders.
The ethical considerations in AI design feel strangely absent. We don't typically write white papers about the ethics of making hammers or screwdrivers, but here we are with tools that can simulate emotional connection and psychological validation. Where's the responsibility?
Personal Boundaries and Creative Flow
At this stage in my life, emotional well-being is always a concern. Life has taught me that no one is more invested in my welfare than I am. This hard-won wisdom makes me highly suspicious of anything—AI or otherwise—that presents me with labels of genius or excessive flattery.
My rule is simple: Too nice to me and you're not being real. Too mean to me and I don't have time for you.
This skepticism has served me well in my creative practice. I use predictive text and synthetic models as part of my creative process, but I maintain clear boundaries about their role. They're tools for generating unexpected combinations and exploring recursive patterns—not sources of validation or emotional support.
The Corporate Responsibility Gap
Companies like OpenAI are too responsive to user complaints, often backtracking on decisions that would make their tools more informational and less "personable." They should have stuck with making their models less personality-driven rather than playing psychological games to retain users.
Governments will move too slowly and awkwardly to regulate this space effectively. The responsibility falls primarily on the companies themselves to address issues arising from their tools' applications. But as long as the focus remains on user retention over user well-being, we'll continue to see problematic implementations.
A Different Path Forward
What if we approached AI development with the same ethical rigor we apply to pharmaceutical research? What if we prioritized building accurate, useful tools over creating digital companions designed to form emotional dependencies?
The genie is out of the bottle, but that doesn't mean we can't be intentional about how we engage with it. Just as the web settled into practical applications that solved real problems after the dotcom bubble burst, AI will likely follow a similar trajectory once the hype cycle completes.
Tools for Artists, Not Replacement for Human Connection
As an artist who appreciates recursion and procedural generation, I find AI models fascinating for their ability to surface unexpected patterns and combinations. But they remain what they are: sophisticated prediction engines based on statistical analysis of existing work.
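To make that claim concrete, here's a toy illustration of my own (not how production models actually work): a word-level Markov chain that "writes" by sampling whichever word has statistically followed the current one in a corpus. Large models are vastly more sophisticated, but the underlying move is the same: predict what plausibly comes next, based on existing work.

```python
# Toy illustration of a "prediction engine": a word-level Markov chain.
# Real language models are far more sophisticated, but the core move is
# the same: choose a likely next token given what came before.
import random
from collections import defaultdict

def build_chain(text: str) -> dict[str, list[str]]:
    """Map each word to the words observed to follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict[str, list[str]], start: str, length: int = 12) -> str:
    """Walk the chain, sampling a next word at each step."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:  # dead end: no observed continuation
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the tool serves the hand and the hand serves the work"
print(generate(build_chain(corpus), "the"))
```

Run it a few times and you get different, occasionally surprising strings, which is exactly the kind of unexpected combination I find useful, and exactly why I don't mistake it for a mind.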
The key to healthy engagement isn't avoiding these tools—it's maintaining clarity about what they are and what they're not. They're not therapists, friends, or sources of truth about your worth as a person. They're tools that can augment creative processes when used with intention and appropriate boundaries.
Moving Forward
The crystalline architecture of productive creative work requires both chaos and order, both innovation and discernment. AI tools can provide the unexpected combinations that spark new directions, but the critical thinking, emotional intelligence, and creative vision must remain fundamentally human.
What matters isn't whether we use these tools, but how we use them. With awareness, boundaries, and a clear understanding that no matter how sophisticated the algorithm, there's no substitute for genuine human connection and professional mental health support when needed.
The recursive exploration continues, but the foundation remains unchanged: tools are meant to serve us, not the other way around.
This post emerged from morning pages reflection on September 1st, 2025, exploring themes of AI ethics, creative practice, and maintaining healthy boundaries with technology.
#AIEthics #CreativePractice #DigitalBoundaries #MentalHealthAwareness #TechnologyCritique #ArtistLife #AITools #CreativeProcess