AI Policy
AI is being built into almost every piece of software on the market, which makes complete avoidance increasingly unrealistic.
At KHANDID.STUDIO I choose to engage with AI consciously, using it only where it supports the quality, care, and integrity of my work. I have completed Google’s AI Essentials training and continue learning so that my approach stays aligned with best practices and safety standards and stays responsive to emerging risks.
Below are the nine principles that guide when and how I use AI in my work.
1. Human-centric practice
AI can support my process, but it never replaces my thinking, creativity, or decisions. I remain the author of all work I deliver, and I take full responsibility for the outcome.
2. Relevant and minimal use
I decide case by case whether AI is needed. If it does not add real value, I do not use it. When I do, I keep use as light as possible and avoid running the same content through multiple tools.
3. Clear boundaries on generative AI
I use generative AI only to help refine or rework material I have already created, not to generate final text or public imagery.
I do not produce synthetic images with AI.
4. Privacy and data protection
I use private modes as a default and avoid uploading identifiable or sensitive client information to systems that may train models. Where refinement is needed, I paraphrase or anonymise content, or seek consent before using AI with confidential material.
5. Openness, consent, and traceability
If AI interacts with confidential content or contributes in a meaningful way to a deliverable, I inform the client and obtain consent. I keep a simple record of where AI has supported a project so I can stay accountable.
6. Bias and fairness checks
I do not treat AI outputs as neutral. I check AI-assisted content for bias, stereotypes, or omissions and correct it so that it reflects fairness, accuracy, and respect for people, cultures, and communities.
7. Environmental care and prompt efficiency
I use prompt-efficient methods such as ROCKS and CLEAR to keep queries focused and avoid unnecessary processing. I use tools like GreenPT to understand the environmental intensity of my prompts and aim to minimise my impact.
8. Contributing to safer AI
I support the wider ecosystem by providing issue-based feedback when I encounter harmful, unsafe, or biased behaviour in AI tools.
9. Continuous learning and review
I keep my knowledge of AI, security, and ethics up to date and review these principles as tools and standards evolve, so my practice remains responsible, honest, and caring towards humans and the more-than-human world.
Community, Culture, and Collective Responsibility
For me, ethical AI extends beyond how I deploy it: it also means sharing my learnings so we can all contribute to a safer, more-than-human-centred ecosystem.
What this looks like in practice:
• Reporting harmful or biased outputs
• Sharing learning with clients and collaborators
• Helping others build literacy so AI does not create new barriers
• Supporting fairer, safer, more inclusive design practices
• Amplifying non-dominant perspectives, including Indigenous and global-majority viewpoints
So on that note…