The CommonsBlog
Random Musings on the Android 13 Developer Beta 1
Each time Google releases a new developer preview, I rummage through
the API differences report,
the high-level overviews,
and even the release blog post,
to see if there are things that warrant more attention from
developers. I try to emphasize mainstream features that any developer
might reasonably use, along with things that may not
get quite as much attention, because they are buried in the JavaDocs.
Once we get to beta releases, changes to the API surface should start to diminish,
and Android 13 Beta 1 is no exception. The API differences report is a fraction
of what came in the two developer previews, and even those seemed smaller than in past
years.
What Will Break You, Eventually
READ_EXTERNAL_STORAGE
effectively is deprecated. Once your targetSdkVersion
hits
33
(for most developers, in 2023), you will need to stop requesting READ_EXTERNAL_STORAGE
and start requesting one or more of:
READ_MEDIA_AUDIO
READ_MEDIA_IMAGES
READ_MEDIA_VIDEO
Those will affect your ability to read from the standard shared media collections. For other
types of content, use the Storage Access Framework.
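To make that concrete, here is a minimal sketch of how I expect the branch to look once your targetSdkVersion hits 33 — the activity and its flow are hypothetical, and below 33 the classic permission still applies:
import android.Manifest
import android.os.Build
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

// Hypothetical activity, just to show the old-versus-new permission branch
class GalleryActivity : ComponentActivity() {
    // the Activity Result APIs want the launcher registered up front
    private val requestRead =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            // react to the user's decision here
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val permission = if (Build.VERSION.SDK_INT >= 33) {
            Manifest.permission.READ_MEDIA_IMAGES // granular media permission on Android 13+
        } else {
            Manifest.permission.READ_EXTERNAL_STORAGE // older devices keep the classic permission
        }

        requestRead.launch(permission)
    }
}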
What May Break You, Sooner
Mishaal Rahman of Esper wrote this week about predictive back navigation.
(IMHO, “predictive” often means “royally screwed up”)
Mishaal goes into a lot of detail,
but the upshot is that it appears that Google wants to use animations to help indicate
to a user when a system BACK navigation gesture will send the user to the home screen versus
doing something else. If you manage your own BACK navigation, such as by overriding
onBackPressed()
somewhere, you may need to migrate to the new OnBackInvokedDispatcher
approach, and you may need to fiddle with android:enableOnBackInvokedCallback
if you find that “predictive back navigation” breaks things.
As Mishaal notes, hopefully this Google I/O session
will clarify things.
BTW, note that OnBackInvokedDispatcher
moved from android.view
to android.window
in Beta 1.
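If you want to experiment with the new approach now, here is a minimal sketch based on my reading of the current JavaDocs — guarded by an API-level check, since the dispatcher only exists on 33+:
import android.os.Build
import android.os.Bundle
import android.window.OnBackInvokedDispatcher
import androidx.activity.ComponentActivity

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // also needs android:enableOnBackInvokedCallback="true" on <application> to take effect
        if (Build.VERSION.SDK_INT >= 33) {
            onBackInvokedDispatcher.registerOnBackInvokedCallback(
                OnBackInvokedDispatcher.PRIORITY_DEFAULT
            ) {
                // whatever you used to do in onBackPressed() goes here
            }
        }
    }
}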
What Else Changed?
There is now an option to listen for when the keyguard comes and goes.
This requires a new SUBSCRIBE_TO_KEYGUARD_LOCKED_STATE
permission. However, this is designed solely for use by ROLE_ASSISTANT
apps, so it
will not be available to many developers.
A few notable things were deprecated or renamed. The mysterious SPLASH_SCREEN_STYLE_EMPTY
value was renamed to SPLASH_SCREEN_STYLE_SOLID_COLOR,
which appears to give you a way of opting out of having an icon on the mandatory splash screen.
Finally, if you have been using the force-dark options on WebSettings
, those were
deprecated and replaced by “algorithmic darkening allowed” methods,
because those names just roll off the tongue.
What Comes Next?
We are slated to get three more beta releases. I expect there to be few API changes.
If that turns out to be true, most likely this will be the last “random musings”
post for the Android 13 cycle.
The final release date is murky, as usual, but probably is in the August/September
timeframe. Be sure to budget time in May/June (if not sooner) to start playing with Android 13 and
testing your app’s compatibility with it.
—Apr 30, 2022
Random Musings on the Android 13 Developer Preview 2
Each time Google releases a new developer preview, I rummage through
the API differences report,
the high-level overviews,
and even the release blog post,
to see if there are things that warrant more attention from
developers. I try to emphasize mainstream features that any developer
might reasonably use, along with things that may not
get quite as much attention, because they are buried in the JavaDocs.
What Got Clarified From Last Time
About five weeks ago, I wrote about DP1.
This time around, the “13 DP1 while 12L is in beta?” answer is “12L is now folded
into DP2”.
Also, they are now
documenting the POST_NOTIFICATIONS
permission.
Of particular note is that this permission affects all apps, regardless of
targetSdkVersion
. If your targetSdkVersion
is below Android 13’s presumed
value of 33
, the system will prompt the user to grant permission when you create
your first notification channel, for a newly-installed app on 13.
If you create that channel at an inopportune time… you will
need to modify your app.
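For apps that do move their targetSdkVersion to 33, I expect you will request the permission yourself before getting into the notification business. Here is a sketch of my own (the helper function name is made up):
import android.Manifest
import android.os.Build
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class NotifyingActivity : ComponentActivity() {
    private val requestNotifications =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) setUpChannelsAndNotify() // user said yes; proceed
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        if (Build.VERSION.SDK_INT >= 33) {
            requestNotifications.launch(Manifest.permission.POST_NOTIFICATIONS)
        } else {
            setUpChannelsAndNotify() // no runtime permission needed before Android 13
        }
    }

    private fun setUpChannelsAndNotify() {
        // create your NotificationChannel(s) and raise notifications here
    }
}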
They are also documenting the new option for controlling whether a dynamically-registered receiver is exported.
And, they mentioned in the blog post the option for revoking already-granted
permissions, though the method name changed.
Plus, they talk a bit about BODY_SENSORS_BACKGROUND
.
But beyond that, the mysteries from DP1 remain mysteries.
What Else Got Announced Of Note?
The biggest thing is the Foreground Services (FGS) Task Manager. This allows
users to stop your app’s process easily from the notification shade, if your
app has a foreground service running. Of particular note is that the OS will
nag users periodically about your app, if your service runs most of the time
(20 hours out of the preceding 24, with a maximum of one nag per month).
Developers may wind up adding flags
to avoid users getting bothered by those messages, which in turn will cause
Google to remove the impacts of those flags in some future Android release.
(if you spend enough time in Android development, predicting developer-and-Google
actions in advance becomes simply a matter of pattern matching…)
The War on Background Processing
continues, beyond the FGS Task Manager.
In a tweak to JobScheduler
, “In Android 13, the system now tries to determine the next time an app will be launched”,
which is not at all creepy. Nope, not one bit.
The official blog post mentions a few things that might impact a small
percentage of developers.
And, that’s pretty much it for official stuff.
What’s Up With All the New Permissions?
DP2 adds 15 new permissions over DP1,
let alone prior versions of Android.
Some of these, like MANAGE_WIFI_AUTO_JOIN
and MANAGE_WIFI_INTERFACES
, are
documented as “Not for use by third-party applications”, which makes you wonder
why they bothered to put them in an Android SDK that is explicitly for third-party applications.
They have added three content-specific permissions: READ_MEDIA_AUDIO
,
READ_MEDIA_IMAGES
, and READ_MEDIA_VIDEO
. The JavaDocs indicate that these
are replacements for READ_EXTERNAL_STORAGE
, but only for those apps that target
Android 13 or higher. Presumably, holding one of these permissions, and not READ_EXTERNAL_STORAGE
,
only gives you read access to that media type and not other content.
The other permission that may generate widespread interest is USE_EXACT_ALARM
.
Android 12 added SCHEDULE_EXACT_ALARM
to be able to use exact alarms, but this
is an “app ops” permission, one that users have to grant directly in Settings.
USE_EXACT_ALARM
appears to be a normal
permission, but the JavaDocs make it
plain that the Play Store (and perhaps elsewhere) will require you to fill out a form
to be able to ship an app that requests it. As many developers are discovering
with MANAGE_EXTERNAL_STORAGE
, you need a really good reason to request one of these
sorts of permissions.
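Either way, code that schedules exact alarms probably wants a graceful fallback. Here is a sketch, assuming you request SCHEDULE_EXACT_ALARM or USE_EXACT_ALARM in the manifest — the function itself is mine, not from the docs:
import android.app.AlarmManager
import android.app.PendingIntent
import android.content.Context
import android.os.Build

fun scheduleReminder(context: Context, triggerAtMillis: Long, operation: PendingIntent) {
    val alarms = context.getSystemService(AlarmManager::class.java)

    // canScheduleExactAlarms() only exists on Android 12+; older versions do not gate exact alarms
    if (Build.VERSION.SDK_INT < 31 || alarms.canScheduleExactAlarms()) {
        alarms.setExactAndAllowWhileIdle(AlarmManager.RTC_WAKEUP, triggerAtMillis, operation)
    } else {
        alarms.set(AlarmManager.RTC_WAKEUP, triggerAtMillis, operation) // settle for "roughly then"
    }
}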
What Is Up In the Clouds
Last time, I mentioned ACTION_PICK_IMAGES
and how it may be backed by CloudMediaProvider
objects.
It appears that this might extend beyond images someday, as there are
new interfaces tied to CloudMediaProvider
rendering previews on a supplied Surface.
It’s possible that this is just for animated images, but my guess is that there
may be more coming in this area, if not in Android 13 then in future versions
of Android.
What Makes Me Want to Change the Channel
There are a ton of new and changed classes
in android.media.tv
,
which pertains to actual TV channel playback on an Android TV device.
And, there are
changes to the new android.media.tv.interactive
package,
including a new TvInteractiveAppService
.
That is described as “a service that provides runtime environment and runs TV interactive applications”.
It is not completely clear what “TV interactive applications” are that are somehow
different from “applications that run on Android TV”.
What Brings to Mind a Beatles Song
Activities, fragments, and dialogs all now implement OnBackInvokedDispatcherOwner
.
From these, you can get an OnBackInvokedDispatcher
and use that to register an OnBackInvokedCallback
,
allowing you to get back.
What Makes Me Wonder If These Supplements Are FDA-Approved
A process can be supplemental.
It is unclear what this means.
There is also a new supplementalapi
package,
containing an AdServicesVersion
. Because of course an operating system
should be in the business of managing ads.
Oddly, given these changes, SUPPLEMENTAL_PROCESS_SERVICE
was removed in DP2 after
having been added in DP1.
What Fulfills Some Developer Fantasies
I see a fair number of developers wanting to block the screenshots shown in the overview
screen. FLAG_SECURE
blocks them nicely, but also blocks all other types of screenshots.
Some developers have been trying to play games with replacing the activity content
at just the right time, like that’s ever going to be reliable. Now, we finally can
opt out of those screenshots.
What Helps Me Stay Awake At Night
You can now call setStateListener()
on a WakeLock
and register a WakeLockStateListener
,
letting you know if the WakeLock
is enabled or disabled.
Mostly, this appears to be for cases where you want the wakelock to be enabled,
but the system decided to disable it.
What Else Caught My Eye
There are a bunch of new feature identifiers,
including ones that seem tied to the 12L merger (e.g., FEATURE_EXPANDED_PICTURE_IN_PICTURE
).
Android 12L added activity embedding, as one way to take advantage of larger
screen sizes. 13 DP2 appears to let you control which apps can embed your
activities, via android:knownActivityEmbeddingCerts
attributes on <application>
and <activity>
. And, it appears that you can
allow arbitrary apps to embed your activity via android:allowUntrustedActivityEmbedding
.
There is now an EthernetNetworkSpecifier
,
suggesting a continued push into less-mobile devices, like perhaps TVs.
There is now a Dumpable
and DumpableContainer
pair of interfaces,
for dumping stuff to a PrintWriter
.
showNext()
and showPrevious()
on RemoteViews
are now deprecated
in favor of setDisplayedChild()
.
There is a new LocaleConfig
class,
with a broken explanation.
There is a new isLowPowerStandbyEnabled()
method on PowerManager
, continuing the War on Background Processing. Because
when you have 18 power modes, adding a 19th will help. On the other end of the
power spectrum, you can now find out if you are plugged into a dock,
though I now really want BATTERY_PLUGGED_WHARF
and BATTERY_PLUGGED_PIER
as well.
And it still seems as though changes are afoot for android:sharedUserId
, as
the new EXTRA_NEW_UID
will tell you about changes to uids.
—Mar 19, 2022
Random Musings on the Android 13 Developer Preview 1
Each time Google releases a new developer preview, I rummage through
the API differences report,
the high-level overviews,
and even the release blog post,
to see if there are things that warrant more attention from
developers. I try to emphasize mainstream features that any developer
might reasonably use, along with things that may not
get quite as much attention, because they are buried in the JavaDocs.
I am not feeling the best today, so I apologize if that impacts the quality of this post.
What Gives Me the “Time Has No Meaning” Vibe
12L has not shipped in final form yet, and we already have a 13 developer preview?
Even more surprising is the timeline, indicating that a final edition of 13 might
ship as early as August.
My initial reaction to 12L was that schedules slipped, so they elected to move
tablet-focused items out of the 12 release timeframe. Now, I do not know what to think.
However, you will want to plan on getting your 13 compatibility testing done a bit
earlier than you have had to in previous years.
What Makes Me Want to Pick a Peck of Pickled Photos
ACTION_PICK_IMAGES
is interesting.
I am uncertain what the advantage is of a new Intent
action over having ACTION_OPEN_DOCUMENT
use a different UI for image/video MIME types. Still, anything to improve content access
for developers is a positive thing.
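For the curious, here is a rough sketch of what invoking the picker might look like, based on the DP1 JavaDocs — the activity and the fallback path are hypothetical, and the details could change before the final release:
import android.content.Intent
import android.os.Build
import android.provider.MediaStore
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class PickerActivity : ComponentActivity() {
    private val pickImage =
        registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
            val uri = result.data?.data // the Uri of whatever the user picked, if anything
            // do something useful with uri here
        }

    fun launchPhotoPicker() {
        if (Build.VERSION.SDK_INT >= 33) {
            pickImage.launch(Intent(MediaStore.ACTION_PICK_IMAGES))
        } else {
            // older devices: fall back to ACTION_OPEN_DOCUMENT or similar
        }
    }
}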
Note that the photo picker seems to be backed by CloudMediaProvider
objects.
These appear to serve the same role for the photo picker that document providers serve
for the Storage Access Framework. If your app is in the business of making photos available,
particularly from collections that MediaStore
does not index (e.g., cloud), you may
want to pay close attention to CloudMediaProvider
.
What Makes Me Want to Speak in Tongues
Per-app language preferences
is a very nice improvement. As I wrote about a month ago,
developers use hacks to try to get this sort of behavior, and having an official solution
is great! Even better is a statement about Jetpack support for older devices.
Still, most of my questions from that earlier post
remain unanswered. For example, if the device language is English and the app
language is Spanish, and we use ACTION_PICK_IMAGES
, what language is used by the photo picker?
From an API standpoint, you can bring up the relevant Settings screen via
ACTION_APP_LOCALE_SETTINGS
.
In theory, you can react to changes via ACTION_APPLICATION_LOCALE_CHANGED
,
but that apparently requires an undocumented READ_APP_SPECIFIC_LOCALES
permission. Hopefully, there is a configuration
change when the app language changes, just as there is a configuration change when the
device language changes. LocaleManager
lets you directly manipulate the user’s
selection of language.
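From code, the flow might look something like this sketch — the function name is mine, and I am guessing a bit at typical usage:
import android.app.LocaleManager
import android.content.Context
import android.os.Build
import android.os.LocaleList

// Sketch: set (or clear) a per-app language override via LocaleManager
fun setAppLanguage(context: Context, languageTag: String?) {
    if (Build.VERSION.SDK_INT < 33) return // LocaleManager only exists on Android 13+

    val localeManager = context.getSystemService(LocaleManager::class.java)

    localeManager.applicationLocales = if (languageTag == null) {
        LocaleList.getEmptyLocaleList() // revert to the device language
    } else {
        LocaleList.forLanguageTags(languageTag) // e.g., "es" or "fr-CA"
    }
}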
What Other High-Profile Things Are Nice to See
If you need to talk to local WiFi devices, the NEARBY_WIFI_DEVICES
permission
probably is a big help. This is a common requirement for bootstrapping IoT devices, for example.
JDK 11 support is nice.
If it only goes back to Android 12, it will be years before it matters, but it is still nice.
Programmable shaders sound
promising, if you’re into that sort of thing. Similarly, if you have had hyphenation
performance anxiety before, faster hyphenation
is nice, except that it will be years before that improvement is something that is out
for a majority of devices.
And, for the ~148 developers writing tiles, helping users add your tiles
is a handy thing.
What High-Profile Things Make Me Yawn
I am somewhat mystified by “Intent filters block non-matching intents”,
in terms of what the actual problem is that is being solved. This does not appear
to be a security thing, as external apps can still start your components — they
just cannot do so via a purely explicit Intent
.
Themed app icons
continues Google’s Material You initiative. Color me uninterested.
What Was Rumored But That Google Is Hiding
The Android Resource Economy (TARE) is yet another salvo in The War on
Background Processing. Mishaal Rahman reports that it is there,
but it appears that Google did not document it.
By contrast, POST_NOTIFICATIONS
— the permission that you need to hold
to raise notifications — is in the JavaDocs
but is not mentioned in the required app changes documentation. My guess is that
this is a documentation gap. Mishaal reports that
it will only be enforced for apps targeting API 33.
If true, this gives developers a year to ignore it, only to then scramble
at the last minute to deal with the change.
(not you, of course — you are reading this blog post, so clearly you
are a forward-thinking developer)
Mishaal also mentions that the clipboard will automatically clear,
which is a win for privacy, but really ought to be pointed out to developers
beyond this blog post.
What Makes Me Scratch My Head, But Over There, Not Here
There is a new NEARBY_STREAMING_POLICY
permission.
The underlying policy “controls whether to allow the device to stream its notifications and apps to nearby devices”
(emphasis added).
There is also canDisplayOnRemoteDevices
,
which says “whether the activity can be displayed on a remote device which may or may not be running Android”
(emphasis also added).
This makes me wonder what Google is up to.
What’s Old is New Again
Android 12 added a mandatory splash screen. Android 13 appears to make that less mandatory:
a launcher could try to use setSplashScreenStyle()
with SPLASH_SCREEN_STYLE_EMPTY
to perhaps inhibit that splash screen. At least, that is how I interpret this API.
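If that interpretation is correct, a launcher might wind up doing something like this sketch — the "how" is my guess, and the names come from the DP1 docs, so they could change:
import android.app.ActivityOptions
import android.content.Context
import android.content.Intent
import android.window.SplashScreen

// Sketch of a launcher opting a launch out of the splash-screen icon.
// SPLASH_SCREEN_STYLE_EMPTY is the DP1 name; it was later renamed SPLASH_SCREEN_STYLE_SOLID_COLOR.
fun launchWithoutSplashIcon(context: Context, launchIntent: Intent) {
    val options = ActivityOptions.makeBasic()
        .setSplashScreenStyle(SplashScreen.SPLASH_SCREEN_STYLE_EMPTY)

    context.startActivity(launchIntent, options.toBundle())
}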
What Requires Better Penmanship Than I Possess
Handwriting is getting system-level love, such as supportsStylusHandwriting
.
This matters little to me, as my handwriting sucks.
What Are Other Nice Changes
There is a new, non-dangerous
READ_BASIC_PHONE_STATE
permission.
It is unclear what you get access to with that permission, but READ_PHONE_STATE
seems overused.
Speaking of permissions, not only can you request runtime permissions, but on
Android 13, you can revoke ones that you were granted earlier.
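The exact method name has been in flux during the previews; in the shipping Android 13 SDK it is revokeSelfPermissionOnKill(), as in this sketch (the wrapper function is mine):
import android.Manifest
import android.content.Context
import android.os.Build

// Sketch: give back a permission we no longer need.
// Per the docs, the revocation takes effect once the app's process is killed.
fun dropCameraPermission(context: Context) {
    if (Build.VERSION.SDK_INT >= 33) {
        context.revokeSelfPermissionOnKill(Manifest.permission.CAMERA)
    }
}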
One of the long-standing problems with registerReceiver()
is that the resulting
BroadcastReceiver
was always exported. This is not great from a security standpoint.
Now, it appears as though we can control this.
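Specifically, registerReceiver() now accepts flags for this. Here is a sketch of how I expect it to be used — the wrapper function is mine:
import android.content.BroadcastReceiver
import android.content.Context
import android.content.IntentFilter
import android.os.Build

// Sketch: register a receiver that other apps cannot send broadcasts to (Android 13+)
fun registerPrivateReceiver(context: Context, receiver: BroadcastReceiver, filter: IntentFilter) {
    if (Build.VERSION.SDK_INT >= 33) {
        context.registerReceiver(receiver, filter, Context.RECEIVER_NOT_EXPORTED)
    } else {
        context.registerReceiver(receiver, filter) // exported, as before
    }
}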
A popular request in places like Stack Overflow is for a way to get the current time,
from a time source that cannot be modified by the user. Android 13 gives us
currentNetworkTimeClock()
,
which reports the time from a network time source (e.g., SNTP). As the docs
note, this time still could be modified, but not easily.
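Here is a sketch of what that might look like, with a fallback for when no network time has been obtained yet — the function name is mine:
import android.os.Build
import android.os.SystemClock
import java.time.DateTimeException

// Sketch: prefer the network time source, falling back to the wall clock
fun trustedNowMillis(): Long {
    if (Build.VERSION.SDK_INT >= 33) {
        try {
            return SystemClock.currentNetworkTimeClock().millis()
        } catch (e: DateTimeException) {
            // no network time has been received yet; fall through
        }
    }

    return System.currentTimeMillis() // user-adjustable, but better than nothing
}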
What Will Require Some Work
All those PackageManager
methods that took a “flags” int
? They are all deprecated and
replaced with ones that take richer objects.
If you work with Parcel
directly, there are lots of deprecations and lots of replacements.
What Else Might Break Your Apps
There is a new BODY_SENSORS_BACKGROUND
permission. Presumably, it is required for background apps that wish to read heart
rate or similar data, such as on Wear OS. This permission has scary language
about being “a hard restricted permission which cannot be held by an app until the
installer on record allowlists the permission”. If your app already requests
BODY_SENSORS
, pay close attention to what eventually gets documented
about the need for BODY_SENSORS_BACKGROUND
.
There is a new “light idle mode”, as seen in isDeviceLightIdleMode()
and ACTION_DEVICE_LIGHT_IDLE_MODE_CHANGED
.
This is for “when a device has had its screen off for a short time, switching it into a batching mode where we execute jobs, syncs, networking on a batching schedule”.
The “networking” aspect of this is particularly disconcerting, and hopefully more
will be explained about this mode.
Some methods were outright removed from the SDK, mostly in android.webkit
.
What Else Might Break Your Apps In the Not-Too-Distant Future
android:sharedUserId
is already deprecated. Google appears to be working on migration
paths for apps that presently rely upon it, such as sharedUserMaxSdkVersion
and EXTRA_UID_CHANGING
.
My guess is that android:sharedUserId
will be ignored in some future Android release.
If you are relying upon android:sharedUserId
, start work on some alternative mechanism,
and watch for documentation on how best to migrate to a non-sharedUserId
world.
What Really Needs Documentation
There is a new system service, advertised under the SUPPLEMENTAL_PROCESS_SERVICE
name. It is unclear what this is for.
Mishaal Rahman writes about “hub mode”,
and the docs have things like showClockAndComplications
that seem to tie into that. Perhaps this will debut in a later developer preview.
There is a TvIAppManager
,
described as being the “system API to the overall TV interactive application framework (TIAF) architecture, which arbitrates interaction between applications and interactive apps”.
Right now, that system service has no methods, so the fact that it is undocumented is
not a huge loss. This too might show up in some later developer preview.
There are a bunch of new KeyEvent
keycodes
that really could use some explanation (e.g., what is “Video Application key #1”, exactly?).
—Feb 12, 2022
Navigating in Compose: Criteria
Navigating between screens is a common act in an Android app… though, as
Zach Klippenstein noted,
“screen” is a somewhat amorphous term. Naturally, we want to be able to navigate
to different “screens” when those screens are implemented as composables.
How we do this is a highly contentious topic.
Roughly speaking, there seem to be four major categories of solutions:
-
Use the official Jetpack Navigation for Compose
-
Use some sort of wrapper or helper around Navigation for Compose —
Rafael Costa’s compose-destinations
library
is an example
-
Use Jetpack Navigation, but use the “classic” implementation instead of Navigation
for Compose, using fragments to wrap your screen-level composables
-
Use some separate navigation implementation, such as Adriel Café’s Voyager library
I cannot tell you what to use. I can tell you that you should come up with a set
of criteria for judging various navigation solutions. Based on a survey of a bunch
of navigation solutions, here are some criteria that you may want to consider.
Table Stakes
If your navigation solution does not support forward navigation to N destinations,
or if it does not support back stacks (e.g., a goBack()
function to pop a destination
off the stack and return to where you had been before), use something else.
Compile-Time Type Safety
One key point of using Kotlin, and Java before it, is type safety. The more type
safety we get, the more likely it is that we will uncover problems at compile-time,
rather than only via testing or by the app going 💥 for your users.
…For Routes/Destinations
When you tell the navigation solution where to navigate to in forward navigation,
you may want to prefer solutions where the identifier is something that is type safe.
Some solutions use strings or integers to identify routes or destinations. That makes
it very easy to do some really scary things, like compute a destination using math.
Generally, primitives-as-identifiers offer little compile-time protection. You might prefer
solutions that use enums, sealed class
, marker interfaces, or other things that
identify what are and are not valid options.
(and if you are asking yourself “what about deeplinks?”, that is covered a bit later)
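To illustrate what I mean — this is library-agnostic and uses hypothetical names, not any particular solution’s API:
// Illustrative only: a sealed hierarchy gives the compiler something to check,
// both for the route itself and for its arguments
sealed class Screen {
    object BookList : Screen()
    data class BookDetail(val bookId: Long) : Screen()
    data class Search(val query: String, val includeOutOfPrint: Boolean = false) : Screen()
}

// A hypothetical navigator interface; real solutions differ in the details
interface Navigator {
    fun navigateTo(screen: Screen)
    fun goBack()
}

fun showBook(navigator: Navigator, bookId: Long) {
    // passing an Int or some random String here simply will not compile
    navigator.navigateTo(Screen.BookDetail(bookId))
}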
…For Arguments
Frequently, our screens need data, whether an identifier (e.g., primary key) or
the actual data itself. So, we want to be able to pass that data from previous
screens. All else being equal, you might want to prefer solutions that offer compile-time
type safety, so you do not wind up in cases where you provide a string and the recipient
is expecting an Int
instead.
A related criterion is “content safety”. You might want to prefer solutions where your
code can just pass the data, without having to worry about whether it complies with
any solution-specific limitations. For example, if the solution requires you to URL-encode
strings to be able to pass them safely, that is not great, as you will forget to do this from
time to time. Ideally, the solution handles those sorts of things for you.
…For Return Values
At least for modal destinations, such as dialogs, we often need to pass back some
sort of “result”. For example, we display a dialog to allow the user to pick something,
and we need the previous screen to find out what the user selected. Sometimes, there
are ways of accomplishing this outside of a navigation solution, such as the dialog
updating some shared data representation (e.g., shared Jetpack ViewModel
) where
the previous screen finds out about results reactively. But, if the navigation solution
you are considering offers return values, and you intend to use them, you might want
to prefer ones where those return values are also type-safe and content-safe.
IOW, forward-navigation arguments should not get all the safety love.
Support for Configuration Change and Process Death
Like it or not, configuration changes are real. Birds, perhaps not.
One way or another, your app needs to be able to cope with configuration changes,
and your navigation solution should be able to cope as well, to support your app.
This includes both retaining the navigation data itself across configuration changes
and, ideally, having a pattern for app data for your screens to survive as well
(e.g., Navigation for Compose’s per-route ViewModel
support).
Related is process death:
-
The user uses your app for a while
-
The user gets distracted by some not-a-bird for a while, and your app’s UI moves to the background
-
While in the background, Android terminates your process to free up system RAM
-
The user returns to your app after your process dies, but within a reasonable period of time
(last I knew, the limit was 30 minutes, though that value may have changed over the years)
Android is going to want to not only bring up your app, but pretend that your process
had been around all that time. That is where “saved instance state” comes into play,
and ideally your navigation solution advertises support for this, so your back-stack
and so on get restored along with your UI.
Hooks For Stuff You Might Use
Only you know what your app is going to need to do in terms of its UI. Or perhaps
your designers know, or your product managers. Or, hey, maybe you are just spraying
pixels around like Jackson Pollock sprayed paint.
Who am I to judge?
Regardless, there may be some things that you want in your app’s UI or flow that
tie into what you will need out of your navigation solution: tabs or pagers, bottom navigation bars, navigation drawers, and the like.
Many apps use these sorts of UI constructs. It may not be essential that they be handled
via a navigation solution — you might be able to model them as being “internal implementation”
of a screen, for example. But, it would be good to get a sense of what patterns
are established, if any, for a particular navigation solution to tie into these
kinds of UI constructs. For example, if you need to be able to not only navigate to a screen, but
to a particular tab or page within that screen, it would be nice if the navigation
solution supported that. Perhaps not essential, but nice.
And, for some of these UI constructs, you might be seeking to have multiple back stacks. For example,
you might want to have it so that back navigation within a tab takes you to previous content
within that tab, rather than going back to other tabs that the user previously visited.
Support for multiple back stacks seems to be a bit of an advanced feature, so if this
is important to you, see what candidate navigation solutions offer.
Deeplinks
Deeplinks are popular. Here, by “deeplink”, I not only mean situations where a destination
is triggered from outside of the app, such as from a link on a Web page. I also mean
cases where a destination is determined at runtime based on data from an outside
source, such as a server-driven “tip of the day” card that steers users to specific
screens within the app.
If you think that you will need such things, it will be helpful if your navigation
solution supports them directly. That support may not be required — just as your
other app code can navigate to destinations, your “oh, hey, I got a deeplink” code
can navigate to destinations. However, a navigation solution may simplify that,
particularly for cases where the deeplink is from outside of the app and you need
to decide what to do with the already-running app and its existing back stack.
When evaluating deeplink support, one criterion that I will strongly suggest is:
deeplinks should be opt-in. Not every screen in your app should be directly
reachable by some outside party just by being tied into some navigation system
— that can lead to some security problems.
Also, consider how data in the deeplink will get mapped to your arguments (at least
for routes that take arguments). Some navigation solutions will try to handle
that automatically for you, but be wary of solutions that use deeplinks as an excuse
to be weak on type safety. Ideally, there should be an unambiguous way to convert pieces
of a deeplink (e.g., path segments, query parameters) to navigation arguments, but
in a way that limits any “stringly typed” logic to deeplinks themselves and does not
break type safety elsewhere.
Transitions
Your designers might call for a specific way to transition from screen X to screen
Y, such as a slide, a fade, a slide-and-fade, a fireworks-style explosion destroying
X with Y appearing behind it, etc. Ideally, the navigation solution would handle
those sorts of transitions, particularly if you need to control the back-navigation
transition as well (e.g., the exploded fireworks somehow reassembling themselves into a screen,
because that sounds like fun).
Development Considerations
Does the library have clear documentation? Does it seem to be maintained? Does it
have a clear way of getting support? Does it have a license that is compatible with
your project? These are all common criteria for any library, and navigation solutions
are no exception.
Beyond that, navigation solutions have a few specific things that you might want to
consider, such as:
-
How easily can you support navigation where the destinations might reside in different
modules? Particularly for projects that go with a “feature module” development model,
it is likely that you need a screen in module A to be able to navigate to a screen in
module B.
-
Are there clear patterns for using @Preview
? In principle, a navigation solution
should not get in the way of using @Preview
for screen-level composables, but it would
be painful if it did.
-
Does the solution work for development targets beyond Android? Perhaps you are not
planning on Compose for Desktop or Compose for Web or
Compose for iOS or
Compose for Consoles. If you are, you
are going to want to consider if and how navigation ties into your Kotlin/Multiplatform
ambitions.
This is not a complete list — if there are things that you think are fairly popular
that I missed, reach out!
—Jan 22, 2022
Compose for Wear: CurvedRow() and CurvedText()
Compose UI is not just for phones, tablets, foldables, notebooks, and desktops.
Compose UI is for watches as well, via the Compose for Wear set of libraries.
(Google calls it “Wear Compose” on that page, but that just makes me think
“Wear Compose? There! There Compose!”).
(and, yes, I’m old)
Compose for Wear has a bunch of composables designed for the watch experience.
In particular, Compose for Wear has support for having content curve to match
the edges of a round Wear OS device.
The Compose for Wear edition of Scaffold()
has a timeText
parameter. This
is a slot API, taking a composable as a value, where typically you will see that
composable delegate purely to TimeText()
. That gives you the current time
across the top of the watch screen, including curving that time on round screens:
[screenshot: the current time curving along the top edge of a round watch face]
The implementation of TimeText()
uses CurvedRow()
and CurvedText()
to accomplish this,
if the code is running on a round device. Otherwise, it uses the normal Row()
and
Text()
composables.
TimeText()
is a bit overblown, particularly for a blog post, so
this sample project
has a SimpleTimeText()
composable with a subset of the functionality:
@ExperimentalWearMaterialApi
@Composable
fun SimpleTimeText(
    modifier: Modifier = Modifier,
    timeSource: TimeSource = TimeTextDefaults.timeSource(TimeTextDefaults.timeFormat()),
    timeTextStyle: TextStyle = TimeTextDefaults.timeTextStyle(),
    contentPadding: PaddingValues = PaddingValues(4.dp)
) {
    val timeText = timeSource.currentTime

    if (LocalConfiguration.current.isScreenRound) {
        CurvedRow(modifier.padding(contentPadding)) {
            CurvedText(
                text = timeText,
                style = CurvedTextStyle(timeTextStyle)
            )
        }
    } else {
        Row(
            modifier = modifier
                .fillMaxSize()
                .padding(contentPadding),
            verticalAlignment = Alignment.Top,
            horizontalArrangement = Arrangement.Center
        ) {
            Text(
                text = timeText,
                style = timeTextStyle,
            )
        }
    }
}
We can determine whether or not the screen is round from the isScreenRound
property on the Configuration
, which we get via LocalConfiguration.current
.
If the screen is round, we display the current time in a CurvedText()
and wrap
that in a CurvedRow()
. CurvedText()
knows how to have the letters of the text
follow the curve of the screen, and CurvedRow()
knows how to have child composables
follow the curve of the screen.
The timeText
slot parameter in Scaffold()
puts the time at the top of the
screen by default. That position is controlled by the anchor
parameter to
CurvedRow()
, where the default anchor
is 270f
. anchor
is measured in degrees,
and 270f
is the value for the top of the screen (probably for historical reasons).
SampleRow()
in that sample project lets us display multiple separate strings
via individual CurvedText()
composables, in a CurvedRow()
with a custom anchor
value:
@Composable
private fun SampleRow(anchor: Float, modifier: Modifier, vararg textBits: String) {
    CurvedRow(
        modifier = modifier.padding(4.dp),
        anchor = anchor
    ) {
        textBits.forEach { CurvedText(it, modifier = Modifier.padding(end = 8.dp)) }
    }
}
SampleRow()
accepts a Modifier
and tailors it to add a bit of padding to the CurvedRow()
.
We can then use SampleRow()
to display text in other positions on the screen:
@ExperimentalWearMaterialApi
@Composable
fun MainScreen() {
    Scaffold(
        timeText = {
            SimpleTimeText()
        },
        content = {
            Box(modifier = Modifier.fillMaxSize(), contentAlignment = Alignment.Center) {
                Text(text = "Hello, world!")
                SampleRow(anchor = 180f, modifier = Modifier.align(Alignment.CenterStart), "one", "two", "three")
                SampleRow(anchor = 0f, modifier = Modifier.align(Alignment.CenterEnd), "uno", "dos", "tres")
                SampleRow(anchor = 90f, modifier = Modifier.align(Alignment.BottomCenter), "eins", "zwei", "drei")
            }
        }
    )
}
An anchor
of 0f
is the end edge of the screen, 90f
is the bottom, and 180f
is the start edge.
Note that we also use align()
to control the positioning within the Box()
, with values
that line up with our chosen anchor
values.
The result is that we have text on all four edges, plus a centered “Hello, world!”:
[screenshot: curved text along all four edges of the round screen, with “Hello, world!” centered]
CurvedRow()
does not handle consecutive bits of CurvedText()
all that well —
ideally, use a single CurvedText()
with the combined text. However, CurvedRow()
is not limited to CurvedText()
, and I hope to explore that more in a future blog post.
—Jan 17, 2022