Preface

Nowadays, more and more mobile phones use deep learning for tasks such as image classification, object detection, and style transfer. Previously, users had to submit their data to a server and wait for the results, which has several drawbacks. First, there is speed: uploading images and downloading results can account for most of the total time, slowing down the overall prediction. Second, modern phones have become powerful enough to run deep learning inference locally, so the round trip is unnecessary. Finally, there is privacy: if predictions run on the device, users never need to upload their images, which significantly improves security. In this chapter, we will therefore learn how to deploy a PaddlePaddle prediction model to Android phones.

Compiling the paddle-mobile Library

To deploy a trained PaddlePaddle prediction model to an Android phone, we need the paddle-mobile framework. This framework is designed to make it easy to deploy PaddlePaddle models to mobile devices such as Android phones, iOS devices, and the Raspberry Pi, and it is optimized for mobile hardware to deliver better prediction speed on these devices.

There are two methods to compile the Android-compatible paddle-mobile library: using Docker for compilation and using Ubuntu for cross-compilation.

Using Docker for Compilation

The following steps are executed as the root user for simplicity:

  1. Install Docker (Ubuntu-specific command):
apt-get install docker.io
  2. Clone the paddle-mobile repository:
git clone https://github.com/PaddlePaddle/paddle-mobile.git
  3. Navigate to the paddle-mobile directory and build the Docker image:
cd paddle-mobile
# Compilation may take a long time
docker build -t paddle-mobile:dev - < Dockerfile

After compilation, verify the image with:

docker images

Example output:

root@test:/home/test# docker images
REPOSITORY                          TAG                 IMAGE ID            CREATED             SIZE
paddle-mobile                       dev                 fffbd8779c68        20 hours ago        3.76 GB
  4. Run the image and enter the container (from the paddle-mobile root directory):
docker run -it -v $PWD:/paddle-mobile paddle-mobile:dev
  5. Inside the container, execute:
root@fc6f7e9ebdf1:/# cd paddle-mobile/
root@fc6f7e9ebdf1:/paddle-mobile# cmake . -DCMAKE_TOOLCHAIN_FILE=tools/toolchains/arm-android-neon.cmake
  6. (Optional) Run ccmake . to adjust build options (e.g., set NET to googlenet to build a smaller library). Press c to configure and g to generate and exit.
...
CMAKE_INSTALL_PREFIX             /usr/local
CMAKE_TOOLCHAIN_FILE             /paddle-mobile/tools/toolchains/arm-android-neon.cmake
CPU                              ON
DEBUGING                         ON
FPGA                             OFF
LOG_PROFILE                      ON
MALI_GPU                         OFF
NET                              default
USE_EXCEPTION                    ON
USE_OPENMP                       ON
...
  7. Build the library:
root@fc6f7e9ebdf1:/paddle-mobile# make
  8. Exit the container:
root@fc6f7e9ebdf1:/paddle-mobile# exit
  9. The compiled library libpaddle-mobile.so will be in the build directory:
root@test:/home/test/paddle-mobile/build# ls
libpaddle-mobile.so

Cross-Compiling with Ubuntu

  1. Download and extract the Android NDK:
wget https://dl.google.com/android/repository/android-ndk-r17b-linux-x86_64.zip
unzip android-ndk-r17b-linux-x86_64.zip
  2. Set the NDK environment variable:
export NDK_ROOT="/home/test/android-ndk-r17b"

Verify with:

root@test:/home/test# echo $NDK_ROOT
/home/test/android-ndk-r17b
  3. Install CMake (version ≥ 3.11):
wget https://cmake.org/files/v3.11/cmake-3.11.2.tar.gz
tar -zxvf cmake-3.11.2.tar.gz
cd cmake-3.11.2
./bootstrap
make
make install

Verify with:

cmake --version
  4. Clone the paddle-mobile repository:
git clone https://github.com/PaddlePaddle/paddle-mobile.git
  5. Compile the library:
cd paddle-mobile
cmake . -DCMAKE_TOOLCHAIN_FILE=tools/toolchains/arm-android-neon.cmake
make
  6. The compiled library libpaddle-mobile.so will be in paddle-mobile/build.

Creating the Android Project

Use Android Studio to create a basic Android project (without C++ support, since the native library has already been compiled). Follow these steps:

  1. In the main directory, create assets/infer_model to store the PaddlePaddle prediction model. For this chapter, use the model from PaddlePaddle From Entry to Training - Custom Image Dataset Recognition and copy it here.

  2. Create jniLibs in main to store the compiled libpaddle-mobile.so.

  3. Add storage permissions to the Android manifest (note that on Android 6.0 and later these must also be requested at runtime):

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
  4. Modify activity_main.xml:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <LinearLayout
        android:id="@+id/ll"
        android:orientation="horizontal"
        android:layout_alignParentBottom="true"
        android:layout_width="match_parent"
        android:layout_height="50dp">

        <Button
            android:layout_weight="1"
            android:id="@+id/load"
            android:text="Load Model"
            android:layout_width="0dp"
            android:layout_height="match_parent" />

        <Button
            android:id="@+id/clear"
            android:layout_weight="1"
            android:text="Clear"
            android:layout_width="0dp"
            android:layout_height="match_parent" />

        <Button
            android:id="@+id/infer"
            android:layout_weight="1"
            android:text="Predict Image"
            android:layout_width="0dp"
            android:layout_height="match_parent" />
    </LinearLayout>

    <TextView
        android:layout_above="@id/ll"
        android:id="@+id/show"
        android:hint="Prediction Result"
        android:layout_width="match_parent"
        android:layout_height="100dp" />

    <ImageView
        android:id="@+id/image_view"
        android:layout_above="@id/show"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</RelativeLayout>
  5. Create the com.baidu.paddle package and add PML.java to declare the native methods exposed by the library:
package com.baidu.paddle;

public class PML {
    public static native void setThread(int threadCount);
    public static native boolean load(String modelDir);
    public static native boolean loadQualified(String modelDir);
    public static native boolean loadCombined(String modelPath, String paramPath);
    public static native boolean loadCombinedQualified(String modelPath, String paramPath);
    public static native float[] predictImage(float[] buf, int[] ddims);
    public static native float[] predictYuv(byte[] buf, int imgWidth, int imgHeight, int[] ddims, float[] meanValues);
    public static native void clear();
}
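The predictImage method takes a flattened float buffer whose length must equal the product of the dimensions in ddims ({1, 3, 224, 224} gives 1 × 3 × 224 × 224 = 150,528 floats). A quick off-device check of that relationship, using a hypothetical helper class:

```java
// Hypothetical stand-alone helper: the input buffer length for predictImage
// must equal the product of all tensor dimensions in ddims.
public class DdimsDemo {
    static int bufferLength(int[] ddims) {
        int n = 1;
        for (int d : ddims) n *= d;
        return n;
    }

    public static void main(String[] args) {
        System.out.println(bufferLength(new int[]{1, 3, 224, 224})); // 150528
    }
}
```

If the buffer you pass does not match this length, the native side has no way to interpret the data correctly, so it is worth validating before calling into JNI.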
  6. Create Utils.java for image processing and model file operations:
package com.baidu.paddle;

import android.content.Context;
import android.database.Cursor;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.net.Uri;
import android.provider.MediaStore;

import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;

public class Utils {
    // Get the label with the highest probability
    public static int getMaxResult(float[] result) {
        float probability = result[0];
        int label = 0;
        for (int i = 0; i < result.length; i++) {
            if (probability < result[i]) {
                probability = result[i];
                label = i;
            }
        }
        return label;
    }

    // Convert an image to a channel-planar (CHW) float array matching ddims
    public static float[] getScaledMatrix(Bitmap bitmap, int desWidth, int desHeight) {
        // Scale the bitmap to the network input size before reading pixels
        Bitmap scaled = Bitmap.createScaledBitmap(bitmap, desWidth, desHeight, true);
        float[] dataBuf = new float[3 * desWidth * desHeight];
        int[] pixels = new int[desWidth * desHeight];
        scaled.getPixels(pixels, 0, desWidth, 0, 0, desWidth, desHeight);
        int area = desWidth * desHeight;
        for (int i = 0; i < area; i++) {
            int clr = pixels[i];
            dataBuf[i] = ((clr >> 16) & 0xFF) / 255.0f;            // R plane
            dataBuf[area + i] = ((clr >> 8) & 0xFF) / 255.0f;      // G plane
            dataBuf[2 * area + i] = (clr & 0xFF) / 255.0f;         // B plane
        }
        return dataBuf;
    }

    // Compress image to avoid memory issues
    public static Bitmap getScaleBitmap(String filePath) {
        BitmapFactory.Options opt = new BitmapFactory.Options();
        opt.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(filePath, opt);
        int width = opt.outWidth, height = opt.outHeight;
        opt.inSampleSize = 1;
        while (width / opt.inSampleSize > 500 || height / opt.inSampleSize > 500) {
            opt.inSampleSize *= 2;
        }
        opt.inJustDecodeBounds = false;
        return BitmapFactory.decodeFile(filePath, opt);
    }

    // Convert URI to image path
    public static String getPathFromURI(Context context, Uri uri) {
        String[] projection = {MediaStore.Images.Media.DATA};
        Cursor cursor = context.getContentResolver().query(uri, projection, null, null, null);
        if (cursor == null) return uri.getPath();
        cursor.moveToFirst();
        int columnIndex = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.DATA);
        String path = cursor.getString(columnIndex);
        cursor.close();
        return path;
    }

    // Copy model files from assets to cache
    public static void copyFileFromAsset(Context context, String oldPath, String newPath) {
        try {
            String[] files = context.getAssets().list(oldPath);
            if (files.length > 0) {
                new File(newPath).mkdirs();
                for (String file : files) {
                    copyFileFromAsset(context, oldPath + "/" + file, newPath + "/" + file);
                }
            } else {
                InputStream is = context.getAssets().open(oldPath);
                FileOutputStream fos = new FileOutputStream(new File(newPath));
                byte[] buffer = new byte[1024];
                int len;
                while ((len = is.read(buffer)) != -1) fos.write(buffer, 0, len);
                is.close();
                fos.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
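The pure-array parts of Utils (the argmax in getMaxResult and the power-of-two inSampleSize loop in getScaleBitmap) can be verified off-device. The following stand-alone sketch (hypothetical class, same logic) shows both:

```java
// Hypothetical stand-alone check of the pure-array logic used in Utils:
// argmax over the prediction scores and the inSampleSize downsampling loop.
public class UtilsLogicDemo {
    // Same argmax as Utils.getMaxResult: index of the highest score
    static int getMaxResult(float[] result) {
        int label = 0;
        for (int i = 1; i < result.length; i++) {
            if (result[i] > result[label]) label = i;
        }
        return label;
    }

    // Same factor as Utils.getScaleBitmap: keep doubling until both
    // dimensions fit within 500 pixels.
    static int computeInSampleSize(int width, int height) {
        int inSampleSize = 1;
        while (width / inSampleSize > 500 || height / inSampleSize > 500) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        System.out.println(getMaxResult(new float[]{0.1f, 0.7f, 0.2f})); // 1
        System.out.println(computeInSampleSize(2000, 1500));             // 4
    }
}
```

A 2000 × 1500 photo needs inSampleSize = 4 (2000 / 4 = 500 fits the limit), which keeps decoded bitmaps small enough to avoid out-of-memory errors on low-end phones.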
  7. Modify MainActivity.java:
package com.baidu.paddle;

import android.app.Activity;
import android.content.Intent;
import android.database.Cursor;
import android.graphics.Bitmap;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;

public class MainActivity extends Activity {
    private String modelPath;
    private final int[] ddims = {1, 3, 224, 224}; // Batch, Channels, Height, Width
    private ImageView imageView;
    private TextView showTv;

    static {
        System.loadLibrary("paddle-mobile");
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        modelPath = getCacheDir().getAbsolutePath() + "/infer_model";
        Utils.copyFileFromAsset(this, "infer_model", modelPath);

        imageView = findViewById(R.id.image_view);
        showTv = findViewById(R.id.show);
        Button loadBtn = findViewById(R.id.load);
        Button clearBtn = findViewById(R.id.clear);
        Button inferBtn = findViewById(R.id.infer);

        loadBtn.setOnClickListener(v -> {
            boolean success = PML.load(modelPath);
            Toast.makeText(this, success ? "Model loaded" : "Model load failed", Toast.LENGTH_SHORT).show();
        });

        clearBtn.setOnClickListener(v -> {
            PML.clear();
            Toast.makeText(this, "Model cleared", Toast.LENGTH_SHORT).show();
        });

        inferBtn.setOnClickListener(v -> {
            if (PML.load(modelPath)) {
                Intent intent = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
                startActivityForResult(intent, 1);
            } else {
                Toast.makeText(this, "Load model first", Toast.LENGTH_SHORT).show();
            }
        });
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (resultCode == RESULT_OK && requestCode == 1) {
            Uri uri = data.getData();
            String imagePath = Utils.getPathFromURI(this, uri);
            Bitmap bmp = Utils.getScaleBitmap(imagePath);
            imageView.setImageBitmap(bmp);
            predictImage(imagePath);
        }
    }

    private void predictImage(String imagePath) {
        Bitmap bmp = Utils.getScaleBitmap(imagePath);
        float[] inputData = Utils.getScaledMatrix(bmp, ddims[2], ddims[3]);
        try {
            long start = System.currentTimeMillis();
            float[] result = PML.predictImage(inputData, ddims);
            long end = System.currentTimeMillis();
            int label = Utils.getMaxResult(result);
            String[] labels = {"Apple", "Hami Melon", "Carrot", "Cherry", "Cucumber", "Watermelon"};
            showTv.setText(String.format("Label: %d\nName: %s\nProbability: %.2f\nTime: %dms",
                    label, labels[label], result[label], end - start));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
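Assuming the model expects channel-planar (CHW) input, as the {1, 3, 224, 224} shape suggests, the pixel packing performed before prediction can be illustrated without any Android classes. The class below is a hypothetical stand-alone sketch of that packing:

```java
// Hypothetical illustration of channel-planar (CHW) packing for a
// {1, 3, H, W} input: each ARGB pixel is split into R, G, and B planes.
public class ChwPackingDemo {
    static float[] toChw(int[] argbPixels, int width, int height) {
        int area = width * height;
        float[] buf = new float[3 * area];
        for (int i = 0; i < area; i++) {
            int clr = argbPixels[i];
            buf[i]            = ((clr >> 16) & 0xFF) / 255.0f; // R plane
            buf[area + i]     = ((clr >> 8) & 0xFF) / 255.0f;  // G plane
            buf[2 * area + i] = (clr & 0xFF) / 255.0f;         // B plane
        }
        return buf;
    }

    public static void main(String[] args) {
        // One red and one blue pixel in a 2x1 image:
        // R plane [1, 0], G plane [0, 0], B plane [0, 1]
        int[] pixels = {0xFFFF0000, 0xFF0000FF};
        System.out.println(java.util.Arrays.toString(toChw(pixels, 2, 1)));
    }
}
```

In the planar layout, all red values come first, then all green, then all blue; this is what distinguishes CHW from the interleaved RGBRGB order that getPixels returns.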

Running the Project

After setting up the project, run the app, tap Load Model, then tap Predict Image and select an image from the gallery. The result should look similar to the following:

[Figure: prediction result screenshot]

GitHub Repository

The complete code is available at:
https://github.com/yeyupiaoling/LearnPaddle2/tree/master/note15


Xiaoye