A Peek at SurfaceControlViewHost in Android R
One of the items that I found interesting in
the second half of my R DP2 random musings was
SurfaceControlViewHost. I experimented with it this week, and it at least
partially works. In a nutshell: one app can embed and display a live UI from another app.
For some developers, this sort of cross-app UI embedding has been “the Holy Grail”
for years. You can do a limited version of this with
RemoteViews, but the
widget set is minimal by modern standards. You could create your own
RemoteViews-like structure, but keeping all of the participating apps in sync
can get troublesome. Android 9’s slices… well, OK, those never really caught
But, with Android R and
SurfaceControlViewHost, it is not that hard to set
up cross-process UI delivery. There are no obvious limits as to what that UI
can look like, because the UI itself is not really shared. Instead, the two
processes seem to be sharing a
Surface, with the UI-supplying process rendering
a view hierarchy to that
Surface and the UI-hosting process displaying that
Surface as part of a
SurfaceView.
How Do You Make It Work?
Here are the basic mechanics:
1. Have two apps, with some sort of IPC channel between them. I elected to use a bound service, playing with Google’s Messenger pattern for getting data between the apps. In the source code, you will see an EmbedClient module and an EmbedServer module that represent these two apps.

2. Have the UI client (EmbedClient) set up a SurfaceView and identify the Display on which that SurfaceView will appear. Then, it needs to send to the other app the dimensions of the SurfaceView, the ID of the Display to use, and a “host token” obtained from the SurfaceView’s getHostToken() method. All of those can be stuffed into a Bundle for easy delivery via common IPC patterns (e.g., as part of a Message).

3. Have the UI provider (EmbedServer) set up its UI, such as via view binding. When it receives the details from the client, it can set up a SurfaceControlViewHost tied to the Display and “host token”. It can then attach the root view of the view hierarchy to the SurfaceControlViewHost via addView(). Then, it needs to obtain a SurfacePackage (via getSurfacePackage()) and send that back to the client. A SurfacePackage is Parcelable, so you can send it via any common IPC mechanism (e.g., as part of a reply Message).

4. Once the client receives the SurfacePackage, it can attach it to the SurfaceView via setChildSurfacePackage().
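As a rough sketch of the client side of those steps, here is what gathering and shipping the details might look like. The Bundle keys, the MSG_SHOW_UI code, and the Messenger plumbing are my own illustrative names, not part of any API; getHostToken() and Display IDs are real Android R pieces:

```kotlin
import android.os.Bundle
import android.os.Message
import android.os.Messenger
import android.view.SurfaceView

// Illustrative message code; not part of any Android API
const val MSG_SHOW_UI = 1

fun requestRemoteUi(
  surface: SurfaceView,
  serverMessenger: Messenger,
  clientMessenger: Messenger
) {
  val args = Bundle().apply {
    putInt("width", surface.width)
    putInt("height", surface.height)
    putInt("displayId", surface.display.displayId)
    // SurfaceView.getHostToken() is new in Android R
    putBinder("hostToken", surface.hostToken)
  }

  val msg = Message.obtain(null, MSG_SHOW_UI).apply {
    data = args
    replyTo = clientMessenger // lets the server send the SurfacePackage back
  }

  serverMessenger.send(msg)
}
```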
And that’s it. At this point, the client should be showing the provided UI in
its SurfaceView. If the provider updates that UI, the client should show the updates
in real time.
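The provider side and the final client-side attachment might be sketched like this, following the R developer preview API described above. Again, the Bundle keys, message codes, and layout resource are illustrative assumptions; SurfaceControlViewHost, addView(), getSurfacePackage(), and setChildSurfacePackage() are the actual Android R calls:

```kotlin
import android.content.Context
import android.hardware.display.DisplayManager
import android.os.Bundle
import android.os.Message
import android.os.Messenger
import android.view.SurfaceControlViewHost
import android.view.SurfaceView
import android.view.View

// Illustrative message code; not part of any Android API
const val MSG_UI_READY = 2

// Provider side: build the host from the client's details, attach a view
// hierarchy, and send the resulting SurfacePackage back via the reply Messenger
fun onShowUiRequest(context: Context, args: Bundle, replyTo: Messenger) {
  val dm = context.getSystemService(DisplayManager::class.java)
  val display = dm.getDisplay(args.getInt("displayId"))
  val hostToken = args.getBinder("hostToken")

  val host = SurfaceControlViewHost(context, display, hostToken)
  val root = View.inflate(context, R.layout.embedded_ui, null) // assumed layout
  host.addView(root, args.getInt("width"), args.getInt("height"))

  val result = Bundle().apply {
    putParcelable("surfacePackage", host.surfacePackage)
  }

  replyTo.send(Message.obtain(null, MSG_UI_READY).apply { data = result })
}

// Client side: attach the received SurfacePackage to the SurfaceView
fun onUiReady(surface: SurfaceView, args: Bundle) {
  args.getParcelable<SurfaceControlViewHost.SurfacePackage>("surfacePackage")
    ?.let { surface.setChildSurfacePackage(it) }
}
```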
What About Input?
The docs indicate that touch events on the SurfaceView
should get sent from the client process to the provider process, with the implication
that this will trigger events on the widgets in the provider’s view hierarchy.
What’s Google Going to Do With This?
I have no idea.
Seriously, they could use this for:
A richer replacement for app widgets and slices
A richer option for custom views in notifications
Embedding any of their apps in any other one of their apps (e.g., more powerful options for launching a Hangout from Calendar)
But, my guess is that whatever they have in mind will be something I won’t expect.
What Can We Do With This?
Well, not much, insofar as this is only available on Android R. Since this requires
new methods on
SurfaceView, my guess is that this cannot be backported via a Jetpack
library. For the time being, approximately 0.0% of your user base is running Android R.
However, longer-term, this opens up some interesting possibilities.
From a security standpoint, this technique should allow us to better sandbox untrusted content. We have had options for doing that, with dedicated low-permission processes, but they had only classic IPC ways of getting information out of the sandbox. Now, they can present a full UI, yet still not have any means of attacking the client displaying that UI.
Apps with a rich third-party ecosystem of plugins could adopt this for incrementally tighter
integration with those plugins. Right now, the only easy thing is for the app to
start an activity in the plugin, if the plugin needs to supply UI. Otherwise, you
are stuck with the UI integration options I mentioned earlier, like RemoteViews.
Now, though, a plugin can provide finer-grained UI elements that could be embedded
in the core app’s UI, to offer a more seamless experience to the user.
Assuming that there is no significant performance overhead for delivering a UI this way, this opens the doors for popular content publishers to get their content embedded in other apps, yet still maintain complete control over that content.
But, once again, my guess is that the best use of this tech is something that I am not currently thinking of.
Ordinarily, I would have expected presentations on this at Google I|O. Now, in our I|O-free world, I do not know when or how Google might provide more information on this API and how they (and we) might use it. But, it’s something that I will be keeping an eye on, as it’s one of the more intriguing new additions in Android R.
Want an expert opinion on your Android app architecture decisions? Perhaps Mark Murphy can help!