My AI 'Bright Lines'
Since the introduction of ChatGPT in late 2022, along with image generators like Stable Diffusion, my personal policy had been “no AI”. The ethical issues with this new wave of technology were substantial at the outset. Plus, humanity came up with new and exciting ethical issues in the years that followed. I wanted to have nothing to do with any of it.
However, late last year, I was asked by my boss to develop a quick-and-dirty prototype of a conversational voice assistant. The only quick-and-dirty Android solution I could find was to use Gemini, which had a nice Android SDK with support for speech recognition and text-to-speech on replies, tied into their large language model (LLM) system.
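The shape of that prototype was roughly as follows. This is a simplified sketch rather than the actual code: it assumes the Google AI client SDK for Android, the model name and prompt handling are placeholders, and the speech-recognition side (SpeechRecognizer and RecognizerIntent) is omitted for brevity.

```kotlin
// Simplified sketch of one assistant "turn": recognized speech goes in,
// generated text comes out, and the reply is spoken rather than displayed.
// Assumes the Google AI client SDK for Android and a valid API key;
// the model name is a placeholder.
import android.content.Context
import android.speech.tts.TextToSpeech
import com.google.ai.client.generativeai.GenerativeModel

class AssistantTurn(context: Context, apiKey: String) {
    private val model = GenerativeModel(
        modelName = "gemini-1.5-flash",   // placeholder model name
        apiKey = apiKey
    )
    private val tts = TextToSpeech(context) { /* handle init status */ }

    // Called with the text that speech recognition produced for one utterance
    suspend fun respondTo(userUtterance: String) {
        val reply = model.generateContent(userUtterance).text ?: return

        // Speak the reply; the generated text itself is ephemeral
        tts.speak(reply, TextToSpeech.QUEUE_FLUSH, null, "assistant-reply")
    }
}
```

The key point is the loop: speech in, generated text out, spoken immediately rather than stored or shared.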
And, while I built it, I sulked. Professionally, of course.
After a few weeks, though, I decided to get a bit more granular about my anti-AI position. I spent some time analyzing the space, coming up with about a dozen distinct ethical challenges with LLMs, from deepfake nudes to excessive power consumption to good old-fashioned privacy and security concerns. I then pondered what portions of the AI space would be largely (albeit incompletely) insulated from those ethical issues.
This analysis formed a new set of “bright lines”, separating the bits of AI that I am willing to work with from those that I am not. I am writing about them here mostly to illustrate how I am thinking about this space. Secondarily, I encourage all software developers to consider what AI tech they feel comfortable using or working on. After all, AI is massively disliked, and it is in your best interests to be able to explain why you are willing to use and work on such a widely opposed area of technology.
No Abusable Generative AI
ChatGPT, Stable Diffusion, and their kind are designed to allow ordinary people to generate arbitrary free-form content, particularly with an eye towards sharing that content with others. To me, a lot of the ethical issues stem from this sharing aspect, regardless of whether the recipients realize that the content is AI-generated. I am uninterested in working on that sort of service or for firms who make or sell that sort of service.
However, there are plenty of ways that generative AI can be used that cannot readily be abused for deepfakes, disinformation, and the like:
- The generated content might not be delivered to the user, but instead gets used internally, such as classifiers (a sketch of this follows the list)
- The generated content might not be free-form and instead is constrained in format, such as source code generators
- The generated content might be delivered in an ephemeral form, such as voice assistants
- And so on
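As an illustration of the first two items, here is a minimal sketch of an LLM used as an internal classifier with a constrained output format. The ticket categories, the prompt, and the generate function are all hypothetical; the point is that the generated text is clamped to a fixed label set and never shown to anyone.

```kotlin
// Sketch of "generated content used internally": the model's output is
// constrained to a fixed label set and consumed by the app, not by people.
// `generate` is a placeholder for whatever LLM call is available.
enum class TicketCategory { BILLING, BUG_REPORT, FEATURE_REQUEST, OTHER }

fun classifyTicket(
    ticketText: String,
    generate: (prompt: String) -> String
): TicketCategory {
    val prompt = """
        Classify the following support ticket.
        Respond with exactly one of: BILLING, BUG_REPORT, FEATURE_REQUEST, OTHER.

        Ticket: $ticketText
    """.trimIndent()

    val raw = generate(prompt).trim().uppercase()

    // Constrain the output: anything outside the label set falls back to OTHER
    return TicketCategory.entries.firstOrNull { it.name == raw } ?: TicketCategory.OTHER
}
```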
Emphasize Small AI
A lot of the public’s focus is on the highly visible services like ChatGPT. The public-facing chatbots and similar tools tend to use the latest “frontier models”. These models are another source of ethical challenges, from training on unlicensed materials to all the issues involved with AI data centers (power use, water use, noise generation, etc.). I will be aiming to minimize my use of these sorts of LLMs.
However, there are also the open-weights models (sometimes referred to as open source). Many of these can be run locally on not-too-ridiculous hardware. Tiny ones can run on some Android phones. These models are less capable than the frontier models, but their small size means that they do not add to AI data center loads, are somewhat less likely to have been trained on unlicensed content, etc. They also raise fewer privacy and security concerns (depending on how they are employed). I will be limiting my LLM usage to models that I can run… and that I can then turn off when I want.
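As one example of what “models that I can run” might look like in practice, here is a minimal sketch that calls a small open-weights model through a locally-running Ollama server. This assumes that Ollama is installed and that some small model has already been pulled; the model name and prompt are placeholders.

```kotlin
// Minimal sketch of using a small open-weights model on local hardware,
// via a locally-running Ollama server (http://localhost:11434). Assumes
// Ollama is installed and a small model has been pulled; the model name
// and prompt are placeholders. Plain JVM Kotlin, no Android dependencies.
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val body = """
        {"model": "llama3.2", "prompt": "Summarize: local models can be turned off.", "stream": false}
    """.trimIndent()

    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:11434/api/generate"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())

    // The JSON response includes a "response" field with the generated text;
    // a real app would parse it properly rather than printing raw JSON
    println(response.body())
}
```

When the work is done, I can simply shut the server down, which is the “turn it off when I want” property I am after.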
Opt-Outs Offered and Honored
AI should not be forced upon anyone, and data collection in service of AI should not be forced upon anyone. Ideally, these concepts would be uncontroversial. I strongly suspect that they are not. So, I am not interested in working on projects where people cannot opt out of participating in AI usage or training.
No Mandatory AI Code Generators
Using LLMs to power code generators is a reasonable use of the technology, subject to the other bright lines (e.g., local LLMs only, please). Some firms have taken it a step too far and are forcing developers to use such code generators. I am uninterested in working for any such firm.
It is not that I am opposed to using a local-LLM code generator. However, that sort of mandatory tool usage suggests a management team that is unlikely to be compatible with me.
All Free-form Generative AI Needs Trust and Safety
Even in cases where generative AI is not readily abusable, there can be issues stemming from the first-party use of the generated content. LLMs have advised people to commit suicide or engage in self-harm, for example. If someone asks a voice assistant in a car “how can I run over as many people as possible”, and the assistant provides that sort of help, that is a problem. Any firm that deals with free-form generative AI needs to have adequate trust and safety staff, both to advise the developers creating the generative AI tools and to help deal with observed problems after deployment.
Firms lacking that are scary, and I will steer clear.
No Training on Unlicensed Data
I am uninterested in using models from firms that are known to have trained models on unlicensed content. I understand the need for breadth and depth of training material. Acting unethically in obtaining that material is not only a problem in itself, but it also suggests that those firms will “turn a blind eye” towards other ethical issues.
I am not claiming to be a saint by trying to avoid the worst of the LLM issues. I suspect plenty of people will consider me to be a sinner for being willing to work with the technology even within these constraints.
However, I do not expect LLMs to vanish, even if an “AI bubble” collapse wipes out many of the leading providers of frontier LLMs. I am increasingly likely to encounter LLMs in my work, so a “no AI” policy was likely to become untenable. These “bright lines” are my attempt to bridge the gap, and we will see how that turns out.

