Well, after a one-day delay,
the rest of this wave’s artifacts made it into Maven!
Five brand-new artifacts showed up, four for Tracing and one for XR:
androidx.tracing:tracing-desktop
androidx.tracing:tracing-wire
androidx.tracing:tracing-wire-android
androidx.tracing:tracing-wire-desktop
androidx.xr.projected:projected-binding
The roster of 700+ updated artifacts can be found here!
—Jan 29, 2026
While Google claims that a lot of artifacts were released,
somebody forgot to put them into Maven, as maven.google.com does not know about them. 🤷🏻
What we do have is a patch release to Media3:
androidx.media3:media3-cast:1.9.1
androidx.media3:media3-common:1.9.1
androidx.media3:media3-common-ktx:1.9.1
androidx.media3:media3-container:1.9.1
androidx.media3:media3-database:1.9.1
androidx.media3:media3-datasource:1.9.1
androidx.media3:media3-datasource-cronet:1.9.1
androidx.media3:media3-datasource-okhttp:1.9.1
androidx.media3:media3-datasource-rtmp:1.9.1
androidx.media3:media3-decoder:1.9.1
androidx.media3:media3-effect:1.9.1
androidx.media3:media3-exoplayer:1.9.1
androidx.media3:media3-exoplayer-dash:1.9.1
androidx.media3:media3-exoplayer-hls:1.9.1
androidx.media3:media3-exoplayer-ima:1.9.1
androidx.media3:media3-exoplayer-midi:1.9.1
androidx.media3:media3-exoplayer-rtsp:1.9.1
androidx.media3:media3-exoplayer-smoothstreaming:1.9.1
androidx.media3:media3-exoplayer-workmanager:1.9.1
androidx.media3:media3-extractor:1.9.1
androidx.media3:media3-inspector:1.9.1
androidx.media3:media3-muxer:1.9.1
androidx.media3:media3-session:1.9.1
androidx.media3:media3-test-utils:1.9.1
androidx.media3:media3-test-utils-robolectric:1.9.1
androidx.media3:media3-transformer:1.9.1
androidx.media3:media3-ui:1.9.1
androidx.media3:media3-ui-compose:1.9.1
androidx.media3:media3-ui-compose-material3:1.9.1
androidx.media3:media3-ui-leanback:1.9.1
—Jan 28, 2026
Since the introduction of ChatGPT in late 2022, along with image generators like Stable Diffusion,
my personal policy had been “no AI”. The ethical issues with this new wave of technology were
substantial at the outset. Plus, humanity came up with new and exciting ethical issues in the
years that followed. I wanted to have nothing to do with any of it.
However, late last year, I was asked by my boss to develop a quick-and-dirty prototype of a
conversational voice assistant. The only quick-and-dirty Android solution I could find was to
use Gemini, which had a nice Android SDK with support for speech recognition and text-to-speech
for replies, tied into Google’s large language model (LLM) system.
And, while I built it, I sulked. Professionally, of course.
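For the curious, that sort of prototype boils down to a simple loop: listen, hand the transcript to the model, speak the reply. Here is a rough Kotlin sketch of that loop, using Android’s stock SpeechRecognizer and TextToSpeech around the Google AI client SDK’s GenerativeModel. The model name and API key are placeholders, and the actual Gemini voice support wires audio in more directly, so treat this as illustrative rather than as what I actually shipped.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import android.speech.tts.TextToSpeech
import com.google.ai.client.generativeai.GenerativeModel
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.launch

// Illustrative pipeline: speech in -> LLM -> speech out.
// Assumes RECORD_AUDIO is granted; model name and API key are placeholders.
class VoicePrototype(context: Context, private val scope: CoroutineScope) {
    private val model = GenerativeModel(modelName = "gemini-1.5-flash", apiKey = "YOUR_KEY")
    private val tts = TextToSpeech(context) { /* ignoring init status in this sketch */ }
    private val recognizer = SpeechRecognizer.createSpeechRecognizer(context)

    fun listenOnce() {
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle) {
                val heard = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull() ?: return

                scope.launch {
                    // Ask the model for a reply, then read it aloud
                    val reply = model.generateContent(heard).text ?: "Sorry, I have no idea."
                    tts.speak(reply, TextToSpeech.QUEUE_FLUSH, null, "reply")
                }
            }

            // Remaining callbacks are no-ops for this sketch
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onError(error: Int) {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })

        recognizer.startListening(Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH))
    }
}
```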
After a few weeks, though, I decided to get a bit more granular about my anti-AI position. I
spent some time analyzing the space, coming up with about a dozen distinct ethical challenges
with LLMs, from deepfake nudes to excessive power consumption to good old-fashioned privacy and
security concerns. I then pondered what portions of
the AI space would be largely (albeit incompletely) insulated from those ethical issues.
This analysis formed a new set of “bright lines”, delineating the bits of AI that I am willing to
work with from the bits that I am not. I am writing about them here mostly to illustrate how
I am thinking about this space. Secondarily, I encourage all software developers to consider
what AI tech they feel comfortable using or working on. After all, AI is massively disliked,
and it is in your best interests to be able to explain why you are willing to use and work on
such a widely opposed area of technology.
No Abusable Generative AI
ChatGPT, Stable Diffusion, and their kind are designed to let ordinary people generate
arbitrary free-form content, particularly with an eye towards sharing that content with others.
To me, a lot of the ethical issues stem from this sharing aspect, regardless of whether the
recipients realize that the content is AI-generated. I am uninterested in working on that
sort of service, or for firms that make or sell that sort of service.
However, there are plenty of ways that generative AI can be used that cannot readily be abused
for deepfakes, disinformation, and the like:
- The generated content might not be delivered to the user, but instead gets used internally, such as with classifiers (see the sketch after this list)
- The generated content might not be free-form, but instead is constrained in format, such as with source code generators
- The generated content might be delivered in an ephemeral form, such as with voice assistants
- And so on
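To make the first of those bullets concrete, here is a small Kotlin sketch of the “used internally” idea: an LLM pressed into service as a sentiment classifier, with its free-form output coerced into a fixed label set before anything downstream sees it. It assumes a locally hosted open-weights model behind Ollama’s REST API on localhost:11434 and a model tag of llama3.2; both are placeholders for whatever you actually run.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

private val http = HttpClient.newHttpClient()

// Minimal JSON string escaping, just enough for this sketch
private fun jsonString(s: String): String =
    "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\""

// Classify text as positive/negative/neutral using a locally hosted model.
// The model's raw output is never shown to a user; it is reduced to one label.
fun classifySentiment(text: String): String {
    val prompt = "Classify the sentiment of the following text. " +
        "Answer with exactly one word: positive, negative, or neutral.\n\n$text"
    val body = """{"model":"llama3.2","prompt":${jsonString(prompt)},"stream":false}"""

    val request = HttpRequest.newBuilder(URI.create("http://localhost:11434/api/generate"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val raw = http.send(request, HttpResponse.BodyHandlers.ofString()).body()

    // Crude constraint: accept only a known label, defaulting to neutral
    return listOf("positive", "negative").firstOrNull { raw.contains(it, ignoreCase = true) }
        ?: "neutral"
}

fun main() {
    println(classifySentiment("The new release fixed every bug I cared about!"))
}
```

The same pattern covers the second bullet: if the prompt demands a rigid format (a label, JSON matching a schema, a diff), the room for abusive free-form output shrinks considerably.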
Emphasize Small AI
A lot of the public’s focus is on highly visible services like ChatGPT. The public-facing
chatbots and similar tools tend to use the latest “frontier models”. These models are another
source of ethical challenges, from training on unlicensed materials to all the issues involved
with AI data centers (power use, water use, noise generation, etc.). I will be aiming to minimize
my use of these sorts of LLMs.
However, there are also the open-weights models (sometimes referred to as open source).
Many of these can be run locally on not-too-ridiculous hardware. Tiny ones can run on some Android phones.
These models are less capable than the frontier models, but their small size means that
they do not add to AI data center loads, are somewhat less likely to have been trained on
unlicensed content, etc. They also raise fewer privacy and security concerns (depending on how
they are employed). I will be limiting my LLM usage to models that I can run… and that I can
then turn off when I want.
Opt-Outs Offered and Honored
AI should not be forced upon anyone, and data collection in service of AI should not be forced
upon anyone. Ideally, these concepts would be uncontroversial. I strongly suspect that they are not.
So, I am not interested in working on projects where people cannot opt out of participating in
AI usage or training.
No Mandatory AI Code Generators
Using LLMs to power code generators is a reasonable use of the technology, subject to the other
bright lines (e.g., local LLMs only, please). Some firms have taken it a step too far and are forcing
developers to use such code generators. I am uninterested in working for any such firm.
It is not that I am opposed to using a local-LLM code generator.
However, that sort of mandatory tool usage suggests a management team that is unlikely to be compatible
with me.
Adequate Trust and Safety Staffing
Even in cases where generative AI is not readily abusable, there can be issues stemming from the
first-party use of the generated content. LLMs have advised people to commit suicide and to engage
in self-harm, for example. An in-car voice assistant that, when asked “how can I run over as many
people as possible?”, provides that sort of help is a problem. Any firm that deals with
free-form generative AI needs adequate trust and safety staff, both to advise the developers
creating the generative AI tools and to help deal with observed problems after deployment.
Firms lacking that are scary, and I will steer clear.
No Training on Unlicensed Data
I am uninterested in using models from firms that are known to have trained on unlicensed
content. I understand the need for breadth and depth of training material. However, acting
unethically in obtaining that material is not only a problem in its own right, but also suggests
that such firms will “turn a blind eye” towards other ethical issues as well.
I am not claiming to be a saint by trying to avoid the worst of the LLM issues. I suspect plenty
of people will consider me a sinner for being willing to work with the technology even within
these constraints.
However, I do not expect LLMs to vanish, even if an “AI bubble” collapse wipes
out many of the leading providers of frontier LLMs. I am increasingly likely to encounter LLMs in my work, so
a “no AI” policy was likely to become untenable. These “bright lines” are my attempt to bridge
that gap, and we will see how that turns out.
—Jan 24, 2026
Hey! It’s 2026! 🎉 And with that comes our first batch of Jetpack artifacts!
We got a brand-new artifact group, androidx.glance.wear, with a pair of artifacts:
androidx.glance.wear:wear
androidx.glance.wear:wear-core
Also, androidx.core:core-pip is a new artifact.
The roster of 500+ updated artifacts can be found here!
—Jan 14, 2026
We have a stable Media3 1.9.0 release as the headliner of this week’s updates:
androidx.gradle:gradle-version-catalog:2025.12.01
androidx.gradle:gradle-version-catalog-alpha:2025.12.01
androidx.gradle:gradle-version-catalog-beta:2025.12.01
androidx.media3:media3-cast:1.9.0
androidx.media3:media3-common:1.9.0
androidx.media3:media3-common-ktx:1.9.0
androidx.media3:media3-container:1.9.0
androidx.media3:media3-database:1.9.0
androidx.media3:media3-datasource:1.9.0
androidx.media3:media3-datasource-cronet:1.9.0
androidx.media3:media3-datasource-okhttp:1.9.0
androidx.media3:media3-datasource-rtmp:1.9.0
androidx.media3:media3-decoder:1.9.0
androidx.media3:media3-effect:1.9.0
androidx.media3:media3-exoplayer:1.9.0
androidx.media3:media3-exoplayer-dash:1.9.0
androidx.media3:media3-exoplayer-hls:1.9.0
androidx.media3:media3-exoplayer-ima:1.9.0
androidx.media3:media3-exoplayer-midi:1.9.0
androidx.media3:media3-exoplayer-rtsp:1.9.0
androidx.media3:media3-exoplayer-smoothstreaming:1.9.0
androidx.media3:media3-exoplayer-workmanager:1.9.0
androidx.media3:media3-extractor:1.9.0
androidx.media3:media3-inspector:1.9.0
androidx.media3:media3-muxer:1.9.0
androidx.media3:media3-session:1.9.0
androidx.media3:media3-test-utils:1.9.0
androidx.media3:media3-test-utils-robolectric:1.9.0
androidx.media3:media3-transformer:1.9.0
androidx.media3:media3-ui:1.9.0
androidx.media3:media3-ui-compose:1.9.0
androidx.media3:media3-ui-compose-material3:1.9.0
androidx.media3:media3-ui-leanback:1.9.0
—Dec 24, 2025