The following is the first few sections of a chapter from The Busy Coder's Guide to Android Development, plus headings for the remaining major sections, to give you an idea about the content of the chapter.

The Media Projection APIs

Android 5.0 debuted the ability for Android apps to take screenshots of whatever is in the foreground. It further allows apps to record full-resolution video of whatever is in the foreground, for screencasts, product demo videos, and the like. For whatever reason, this is called “media projection”, and is based around classes like MediaProjectionManager.

In this chapter, we will explore how to use the media projection APIs to record screenshots and screencast-style videos.

Prerequisites

Understanding this chapter requires that you have read the core chapters, plus the chapter on embedding a Web server in your app for debug and diagnostic purposes.

Having read the chapter on using the camera APIs would not be a bad idea, particularly for video recording, though it is not essential.

Requesting Screenshots

Here, “screenshot” (or “screen capture”) refers to generating an ordinary image file (e.g., PNG) of the contents of the screen. Most likely, you have created such screenshots yourself for a desktop OS (e.g., using the PrtSc key on Windows or Linux). Android’s development tools allow you to take screenshots of devices and emulators, and there is a cumbersome way for users to take screenshots using the volume and power keys.

The media projection APIs allow you to take a screenshot of whatever is in the foreground… which does not necessarily have to be your own app. Indeed, you can take screenshots of any app, plus of system-supplied UI, such as the pull-down notification shade.

Not surprisingly, this has privacy and security issues. As such, in order to be able to take screenshots, the user must agree to allow it. In particular, instead of a durable permission that the user might grant once and forget about, the user has to agree to allow your app to take screenshots every time you want to do so.

Introducing andprojector

In 2009, the author of this book wrote a utility called DroidEx. This tool ran on a desktop or notebook and served as a “software projector” for an Android device, as opposed to the hardware projectors (e.g., ELMO) usually needed to show an Android screen to a large audience. Under the covers, DroidEx used the same protocol that Android Studio and DDMS use for screenshots, requesting screenshots as fast as possible, drawing them to a Swing JFrame. Later, Jens Riboe took DroidEx a bit further, adding more of a Swing control UI, in the form of Droid@Screen.

The MediaProjection/andprojector sample project has the same objective as did DroidEx: be able to show the contents of an Android screen to an audience. Nowadays, you might be able to do that straight from hardware, using things like an MHL->HDMI adapter. However, sometimes that option is not available (e.g., the projector you are using for your notebook is limited to VGA). andprojector also differs from DroidEx in a few key ways, starting with the fact that everything runs on the device itself, with the audience viewing the screen through an ordinary Web browser.

On the device, the UI resembles that of the Web server apps profiled elsewhere in this book. When launched, the screen is mostly empty, except for a phone action bar item:

Figure 829: andprojector, As Initially Launched

When you tap the action bar item, a system-supplied dialog appears, asking for permission to take screenshots:

Figure 830: andprojector, Showing Permission Dialog

If you grant permission, you will see URLs that can be used to view what is on the device screen:

Figure 831: andprojector, Showing URLs

Entering one of those (including the trailing slash!) in a Web browser on some other machine on the same WiFi network will cause it to start showing the contents of the device screen. This can be done in either orientation, though it tends to work better in landscape.

Clicking the “stop” action bar item — which replaced the device action bar item when permission was granted — will stop the presentation and return the app to its original state.

With that in mind, let’s see how andprojector pulls off this bit of magic.

Asking for Permission

In the MainActivity that houses our UI, in onCreate(), we get our hands on a MediaProjectionManager system service, in addition to fussing with Material-style coloring for the status bar:

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);  // layout resource name assumed

    Window window=getWindow();
    window.addFlags(WindowManager.LayoutParams.FLAG_DRAWS_SYSTEM_BAR_BACKGROUNDS);
    window.setStatusBarColor(getResources().getColor(R.color.primary_dark));

    mgr=(MediaProjectionManager)getSystemService(MEDIA_PROJECTION_SERVICE);
  }

MediaProjectionManager, at the time of this writing (October 2015), has a grand total of two methods. When the user taps on the device action bar item, we invoke fully 50% of the MediaProjectionManager, calling createScreenCaptureIntent(). This will return an Intent, designed to be used with startActivityForResult(), that brings up the screenshot permission dialog:

  @Override
  public boolean onOptionsItemSelected(MenuItem item) {
    if (item.getItemId()==R.id.start) {  // menu item id assumed
      startActivityForResult(mgr.createScreenCaptureIntent(),
        REQUEST_SCREENSHOT);
    }
    else {
      stopService(new Intent(this, ProjectorService.class));
    }

    return super.onOptionsItemSelected(item);
  }
In onActivityResult(), if our request for permission was granted, we pass the details along via Intent extras to a ProjectorService that we start using startService():

  @Override
  protected void onActivityResult(int requestCode, int resultCode,
                                  Intent data) {
    if (requestCode==REQUEST_SCREENSHOT) {
      if (resultCode==RESULT_OK) {
        Intent i=
            new Intent(this, ProjectorService.class)
                .putExtra(EXTRA_RESULT_CODE, resultCode)
                .putExtra(EXTRA_RESULT_INTENT, data);  // extra name assumed

        startService(i);
      }
    }
  }

The rest of the MainActivity is mostly doing the same sort of work as was seen in the sample apps from the chapter on embedding a Web server, including populating the ListView with the URLs for our projection.

Creating the MediaProjection

ProjectorService extends WebServerService, our reusable embedded Web server. However, most of its business logic — along with code extracted into a separate ImageTransmogrifier — involves fetching screenshots using the media projection APIs, generating PNGs for them, and pushing them over to the Web browser.

In onCreate() of ProjectorService, we retrieve the MediaProjectionManager and WindowManager system services, start our background HandlerThread, and create a Handler tied to that thread:

  @Override
  public void onCreate() {
    super.onCreate();
    mgr=(MediaProjectionManager)getSystemService(MEDIA_PROJECTION_SERVICE);
    wmgr=(WindowManager)getSystemService(WINDOW_SERVICE);
    handlerThread.start();
    handler=new Handler(handlerThread.getLooper());
  }

That HandlerThread is created in an initializer, since it does not directly depend on a Context:

  final private HandlerThread handlerThread=new HandlerThread(
    getClass().getSimpleName(), android.os.Process.THREAD_PRIORITY_BACKGROUND);

In onStartCommand(), we then use the remaining 50% of the MediaProjectionManager API to get a MediaProjection, using the values that were passed to onActivityResult() from our permission request and, in turn, passed to ProjectorService via Intent extras:

    projection=mgr.getMediaProjection(i.getIntExtra(EXTRA_RESULT_CODE, -1),
      (Intent)i.getParcelableExtra(EXTRA_RESULT_INTENT));

We then create an instance of ImageTransmogrifier, passing in the ProjectorService itself as a constructor parameter:

    it=new ImageTransmogrifier(this);

ImageTransmogrifier, in its constructor, sets about determining the screen size (using WindowManager and getDefaultDisplay()). Since high-resolution displays will wind up with very large bitmaps, and therefore slow down the data transfer, we scale the width and height until each screenshot will contain no more than 2<<19 (a bit over one million) pixels.

public class ImageTransmogrifier implements ImageReader.OnImageAvailableListener {
  private final int width;
  private final int height;
  private final ImageReader imageReader;
  private final ProjectorService svc;
  private Bitmap latestBitmap=null;

  ImageTransmogrifier(ProjectorService svc) {
    this.svc=svc;

    Display display=svc.getWindowManager().getDefaultDisplay();
    Point size=new Point();

    display.getSize(size);

    int width=size.x;
    int height=size.y;

    while (width*height > (2<<19)) {
      width=width>>1;
      height=height>>1;
    }

    this.width=width;
    this.height=height;

    imageReader=ImageReader.newInstance(width, height,
        PixelFormat.RGBA_8888, 2);
    imageReader.setOnImageAvailableListener(this, svc.getHandler());
  }
Finally, we create a new ImageReader, which boils down to a class that manages a bitmap Surface that can be written to, using our specified width, height, and bit depth. In particular, we are saying that there can be two outstanding bitmaps at a time, courtesy of the final 2 parameter, and that we should be notified when a new image is ready, by registering the ImageTransmogrifier as the listener. The Handler is used so that we are informed about image availability on our designated background HandlerThread.
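Returning to the scaling loop in the ImageTransmogrifier constructor for a moment: its arithmetic can be reproduced in plain Java, outside of Android, to see what it does to a typical display. ScaledSize and scaleDown() are illustrative names, not part of the sample:

```java
// Sketch of the down-scaling logic: halve both dimensions until the
// pixel count is at or below the 2<<19 (1,048,576) cap used by the sample.
public class ScaledSize {
  static final int PIXEL_CAP=2<<19;

  static int[] scaleDown(int width, int height) {
    while (width*height > PIXEL_CAP) {
      width=width>>1;   // halving both dimensions quarters the pixel count
      height=height>>1;
    }

    return new int[] { width, height };
  }

  public static void main(String[] args) {
    int[] scaled=scaleDown(1440, 2560);  // a typical QHD phone display
    System.out.println(scaled[0]+"x"+scaled[1]);  // 720x1280
  }
}
```

A 1440x2560 display gets halved once, to 720x1280 (921,600 pixels), which fits under the cap.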

Back over in ProjectorService, we then ask the MediaProjection to create a VirtualDisplay, tied to the ImageTransmogrifier and its ImageReader:

    vdisplay=projection.createVirtualDisplay("andprojector",
        it.getWidth(), it.getHeight(), getResources().getDisplayMetrics().densityDpi,
        VIRT_DISPLAY_FLAGS, it.getSurface(), null, handler);

We need to provide:

- a name for the virtual display (any string will do)
- the width and height, obtained from the ImageTransmogrifier
- the screen density
- flags indicating the display's behavior, defined on ProjectorService:

  static final int VIRT_DISPLAY_FLAGS=
      DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY
          | DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC;

- the Surface to render to, obtained from the ImageReader via a getSurface() method on ImageTransmogrifier:

  Surface getSurface() {
    return imageReader.getSurface();
  }

- an optional VirtualDisplay.Callback (null here) and the Handler on which its events should be delivered
We also need to know about events surrounding the MediaProjection itself, so we create and register a MediaProjection.Callback, as part of the full onStartCommand() implementation:

  @Override
  public int onStartCommand(Intent i, int flags, int startId) {
    projection=
        mgr.getMediaProjection(i.getIntExtra(EXTRA_RESULT_CODE, -1),
            (Intent)i.getParcelableExtra(EXTRA_RESULT_INTENT));

    it=new ImageTransmogrifier(this);

    MediaProjection.Callback cb=new MediaProjection.Callback() {
      @Override
      public void onStop() {
        vdisplay.release();
      }
    };

    vdisplay=projection.createVirtualDisplay("andprojector",
        it.getWidth(), it.getHeight(), getResources().getDisplayMetrics().densityDpi,
        VIRT_DISPLAY_FLAGS, it.getSurface(), null, handler);
    projection.registerCallback(cb, handler);

    return START_NOT_STICKY;
  }


And, at this point, the device will start collecting screenshots for us.

Processing the Screenshots

Of course, it would be useful if we could actually receive those screenshots and do something with them.

We find out when a screenshot is available via the listener we set up in ImageTransmogrifier. Since ImageTransmogrifier itself implements the ImageReader.OnImageAvailableListener interface, it supplies the onImageAvailable() implementation:

  @Override
  public void onImageAvailable(ImageReader reader) {
    final Image image=imageReader.acquireLatestImage();

    if (image!=null) {
      Image.Plane[] planes=image.getPlanes();
      ByteBuffer buffer=planes[0].getBuffer();
      int pixelStride=planes[0].getPixelStride();
      int rowStride=planes[0].getRowStride();
      int rowPadding=rowStride - pixelStride * width;
      int bitmapWidth=width + rowPadding / pixelStride;

      if (latestBitmap == null ||
          latestBitmap.getWidth() != bitmapWidth ||
          latestBitmap.getHeight() != height) {
        if (latestBitmap != null) {
          latestBitmap.recycle();
        }

        latestBitmap=Bitmap.createBitmap(bitmapWidth,
            height, Bitmap.Config.ARGB_8888);
      }

      latestBitmap.copyPixelsFromBuffer(buffer);

      if (image != null) {
        image.close();
      }

      ByteArrayOutputStream baos=new ByteArrayOutputStream();
      Bitmap cropped=Bitmap.createBitmap(latestBitmap, 0, 0,
        width, height);

      cropped.compress(Bitmap.CompressFormat.PNG, 100, baos);

      byte[] newPng=baos.toByteArray();

      svc.updateImage(newPng);
    }
  }


This is complex.

First, we ask the ImageReader for the latest image, via acquireLatestImage(). If, for some reason, there is no image, there is nothing for us to do, so we skip all the work.

Otherwise, we have to go through some gyrations to get the actual bitmap itself from the Image object. The recipe for that probably makes sense to somebody, but that “somebody” is not the author of this book. Suffice it to say, the first six lines of the main if block in onImageAvailable() get access to the bytes of the bitmap (as a ByteBuffer named buffer) and determine the width of the bitmap that was handed to us (as an int named bitmapWidth).

Because Bitmap objects are large and therefore troublesome to allocate, we try to reuse one where possible. If we do not have a Bitmap (latestBitmap), or if the one we have is not the right size, we create a new Bitmap of the appropriate size. Otherwise, we use the Bitmap that we already have. Regardless of where the Bitmap came from, we use copyPixelsFromBuffer() to populate it from the ByteBuffer we got from the Image.Plane that we got from the Image that we got from the ImageReader.
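That reuse-or-reallocate pattern is not Android-specific. Here is a minimal sketch of the same idea, using a plain byte array in place of a Bitmap; BufferCache and obtain() are our names, not the sample's:

```java
// Reuse an existing buffer when it is the right size; allocate a new one
// only when the requested size changes, mirroring how the sample treats
// latestBitmap.
public class BufferCache {
  private byte[] buffer=null;

  byte[] obtain(int size) {
    if (buffer == null || buffer.length != size) {
      buffer=new byte[size];  // allocate only on first use or size change
    }

    return buffer;
  }

  public static void main(String[] args) {
    BufferCache cache=new BufferCache();
    System.out.println(cache.obtain(16) == cache.obtain(16));  // true
  }
}
```

Two consecutive requests for the same size hand back the same array; only a size change costs an allocation.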

You might think that this Bitmap would be the proper size. However, it is not. For inexplicable reasons, it will be a bit larger, with excess unused pixels at the end of each row. This is why we need to use Bitmap.createBitmap() to create a cropped edition of the original Bitmap, for our actual desired width.
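The row-padding arithmetic from onImageAvailable() can be illustrated with plain Java. StrideMath and paddedWidth() are our names; the numbers assume RGBA_8888's four bytes per pixel, with a rowStride larger than width times pixelStride:

```java
// Each row of the Image occupies rowStride bytes, which may exceed the
// width*pixelStride bytes of real pixel data; the leftover bytes show up
// as extra (unused) pixels at the end of each row.
public class StrideMath {
  static int paddedWidth(int width, int pixelStride, int rowStride) {
    int rowPadding=rowStride - pixelStride*width;  // leftover bytes per row

    return width + rowPadding/pixelStride;         // width plus padding pixels
  }

  public static void main(String[] args) {
    // a 720-pixel-wide RGBA image whose rows are padded out to 2944 bytes
    System.out.println(StrideMath.paddedWidth(720, 4, 2944));  // 736
  }
}
```

Here, each 2944-byte row carries 2880 bytes of real pixels plus 64 bytes (16 pixels) of padding, so the Bitmap comes back 736 pixels wide and must be cropped to 720.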

We then compress() the cropped Bitmap into a PNG file, get the byte array of pixel data from the compressed result, and hand that off to the ProjectorService via updateImage().

updateImage(), in turn, holds onto this most-recent PNG file in an AtomicReference wrapped around the byte array:

  private AtomicReference<byte[]> latestPng=new AtomicReference<byte[]>();

This way, when some Web server thread goes to serve up this PNG file, we do not have to worry about thread contention with the HandlerThread we are using for the screenshots themselves.
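A stripped-down sketch of that handoff, with the Android pieces removed (PngHolder and current() are illustrative names):

```java
import java.util.concurrent.atomic.AtomicReference;

// One thread publishes the newest PNG bytes; other threads read them.
// AtomicReference makes the swap safe without explicit locking.
public class PngHolder {
  private final AtomicReference<byte[]> latestPng=
      new AtomicReference<byte[]>();

  void updateImage(byte[] newPng) {
    latestPng.set(newPng);    // called from the screenshot HandlerThread
  }

  byte[] current() {
    return latestPng.get();   // called from a Web server thread
  }

  public static void main(String[] args) {
    PngHolder holder=new PngHolder();
    holder.updateImage(new byte[] {1, 2, 3});
    System.out.println(holder.current().length);  // 3
  }
}
```

Readers always see either the previous array or the new one, never a half-written value, which is all this app needs.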

Then, we iterate over all connected browsers’ WebSocket connections and send each a unique URL, where the uniqueness (from SystemClock.uptimeMillis()) serves as a “cache-busting” measure to ensure the browser always requests a fresh copy of the image:

  void updateImage(byte[] newPng) {
    latestPng.set(newPng);
    for (WebSocket socket : getWebSockets()) {
      socket.send("screen/"+Long.toString(SystemClock.uptimeMillis()));
    }
  }

Those WebSockets are enabled by ProjectorService calling serveWebSockets() on its WebServerService superclass, in the configureRoutes() callback:

  @Override
  protected boolean configureRoutes(AsyncHttpServer server) {
    serveWebSockets("/ss", null);

    server.get(getRootPath()+"/screen/.*",  // route pattern assumed
      new ScreenshotRequestCallback());

    return true;
  }

The ScreenshotRequestCallback is an inner class of ProjectorService, one that serves the PNG file itself in response to a request:

  private class ScreenshotRequestCallback
      implements HttpServerRequestCallback {
    @Override
    public void onRequest(AsyncHttpServerRequest request,
                          AsyncHttpServerResponse response) {
      response.setContentType("image/png");

      byte[] png=latestPng.get();
      ByteArrayInputStream bais=new ByteArrayInputStream(png);

      response.sendStream(bais, png.length);
    }
  }

The result is that, whenever a screenshot is ready, we create the PNG file and tell the browser “hey! we have an update!”.


The Web content that is served to the browser is reminiscent of the HTML and JavaScript used in the section on implementing WebSockets. There, the messages being pushed to the browser were timestamps, shown in a list. Here, the messages being pushed to the browser are URLs to load a fresh screenshot.

Hence, the HTML just has an <img> tag for that screenshot, with an id of screen, loading screen/0 at the outset to bootstrap the display:

<img id="screen"
  src="screen/0"
  style="height: 100%; width: 100%; object-fit: contain"/>
<script src="js/app.js"></script>

The JavaScript registers for a WebSocket connection, then updates that <img> with a fresh URL when such a URL is pushed over to the browser:

window.onload = function() {
    var screen=document.getElementById('screen');
    var ws_url=location.href.replace('http://', 'ws://')+'ss';
    var socket=new WebSocket(ws_url);

    socket.onopen = function(event) {
      // console.log(event.currentTarget.url);
    };

    socket.onerror = function(error) {
      console.log('WebSocket error: ' + error);
    };

    socket.onmessage = function(event) {
      screen.src=event.data;
    };
}

Of course, in principle, there could be much more to the Web UI, including some ability to stop all of this when it is no longer needed. Speaking of which…

Shutting Down

The user can stop the screenshot collection and broadcasting either via the action bar item or the action in the Notification that is raised in support of the foreground service. In either case, in onDestroy(), in addition to chaining to WebServerService to shut down the Web server, ProjectorService stops the MediaProjection:

  @Override
  public void onDestroy() {
    projection.stop();

    super.onDestroy();
  }
This should also trigger our MediaProjection.Callback, whose onStop() causes us to release the VirtualDisplay.

Dealing with Configuration Changes

However, there is one interesting wrinkle we have to take into account: what happens if the user rotates the screen? We need to update our VirtualDisplay and ImageReader to take into account the new screen height and width.

ProjectorService will be called with onConfigurationChanged() when any configuration change occurs. This could be due to a screen rotation or other triggers (e.g., putting the device into a car dock). We need to see whether the screen height or width changed — if not, we do not need to do anything. To find out, we create a new ImageTransmogrifier and compare its height and width to those of the current one:

  @Override
  public void onConfigurationChanged(Configuration newConfig) {
    super.onConfigurationChanged(newConfig);

    ImageTransmogrifier newIt=new ImageTransmogrifier(this);

    if (newIt.getWidth()!=it.getWidth() ||
      newIt.getHeight()!=it.getHeight()) {
      ImageTransmogrifier oldIt=it;

      it=newIt;
      vdisplay.resize(it.getWidth(), it.getHeight(),
        getResources().getDisplayMetrics().densityDpi);
      vdisplay.setSurface(it.getSurface());

      oldIt.close();
    }
  }

If a dimension has changed, we tell the VirtualDisplay to resize to the new height and width, attach a new Surface from the new ImageReader, and switch over to the new ImageTransmogrifier, closing the old one.

This solution is not perfect — there is a bit of a race condition if a screenshot is taken while the configuration change is going on — but for a non-production-grade app it will suffice.

Recording the Screen

The preview of this section is en route to Mars.

Yet Another Sample: andshooter

The preview of this section was traded for a bag of magic beans.