# Benchmark custom models

The custom model option in the local benchmark tool currently only supports `tf.GraphModel` and `tf.LayersModel`.

If you want to benchmark more complex TensorFlow.js models with customized input preprocessing logic, you need to implement the `load` and `predictFunc` methods, following this example PR.
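As a rough illustration of what such an entry might look like, here is a hedged sketch. The object shape, the stub model, and the identity preprocessing below are illustrative assumptions, not the tool's actual contract — the example PR referenced above defines the real interface, and real code would call `tf.loadGraphModel` and do real preprocessing:

```js
// Hypothetical sketch of a custom-model entry with `load` and `predictFunc`.
const customModel = {
  load: async () => {
    // Real code would be something like: return await tf.loadGraphModel(modelUrl);
    // Here a stub object stands in for a tf.GraphModel.
    return {predict: (input) => input};
  },
  predictFunc: () => {
    // The returned function receives the loaded model and an input;
    // customized input preprocessing would go here before model.predict.
    return (model, input) => model.predict(input);
  },
};
```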

## Models in local file system

If you have a TensorFlow.js model in the local file system, you can benchmark it by locally hosting both the local benchmark tool and the model on an HTTP server. This approach also works when the online local benchmark tool is blocked by CORS problems while fetching custom models.

### Example

You can benchmark the MobileNet model in the local file system through the following steps:

1. Download the tool.
   ```shell
   git clone https://github.com/tensorflow/tfjs.git
   cd tfjs/e2e/benchmarks/
   ```
2. Download the model.
   ```shell
   wget -O model.tar.gz "https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v2_130_224/classification/3/default/1?tfjs-format=compressed"
   mkdir model
   tar -xf model.tar.gz -C model/
   ```
3. Run an HTTP server to host the model and the local benchmark tool.
   ```shell
   npx http-server
   ```
4. Open http://127.0.0.1:8080/local-benchmark/ in the browser.
5. Select `custom` in the models field.
6. Fill `http://127.0.0.1:8080/model/model.json` into the modelUrl field.
7. Run the benchmark.

## Paths to custom models

The benchmark tool supports three kinds of paths to the custom models.

### URL

Examples:

### LocalStorage

Store the model in LocalStorage first. Run the following code in the browser console:

```js
const localStorageModel = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
const saveResults = await localStorageModel.save('localstorage://my-model-1');
```

Then use `localstorage://my-model-1` as the custom model URL.

### IndexedDB

Store the model in IndexedDB first. Run the following code in the browser console:

```js
const indexDBModel = tf.sequential(
    {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
const saveResults = await indexDBModel.save('indexeddb://my-model-1');
```

Then use `indexeddb://my-model-1` as the custom model URL.

## Variable input shapes

If the input shapes for your model contain a dynamic dimension (e.g. for MobileNet, shape = [-1, 224, 224, 3]), you are required to set it to a valid shape, such as [1, 224, 224, 3], before you can run the benchmark. In the Inputs section you will see an input box for updating the shape. Once the shape is set, you can click the 'Run benchmark' button again to run the benchmark.
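One way to derive a valid concrete shape, as the MobileNet example above does by hand, is to replace every dynamic (-1) dimension with 1. The helper name below is illustrative; the tool itself just asks you to type the shape in the Inputs section:

```js
// Sketch: turn a dynamic input shape into a concrete one by replacing
// each -1 (dynamic) dimension with 1.
function concretizeShape(shape) {
  return shape.map((dim) => (dim === -1 ? 1 : dim));
}

console.log(concretizeShape([-1, 224, 224, 3]));  // → [1, 224, 224, 3]
```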

## Benchmark test

It's easy to set up a web server to host benchmarks and run them via e2e/benchmarks/local-benchmark/index.html. You can manually specify the optional URL parameters as needed. Here is the list of supported URL parameters:

- Model related parameters:
  - `architecture`: same as `architecture` (only certain models have it, such as MobileNetV3 and posenet)
  - `benchmark`: same as `models`
  - `inputSize`: same as `inputSizes`
  - `inputType`: same as `inputTypes`
  - `modelUrl`: same as `modelUrl`, for custom models only
  - `${InputName}Shape`: the input shape array, separated by commas, for custom models only. For example, bodypix's graph model has an input named `sub_2`, so users could add `sub_2Shape=1,1,1,3` to the URL to populate its shape.

- Environment related parameters:
  - `backend`: same as `backend`
  - `localBuild`: local build name list, separated by commas. The name is in short form (in general, the name without the `tfjs-` and `backend-` prefixes; for example, `webgl` for tfjs-backend-webgl, `core` for tfjs-core). Example: `webgl,core`.
  - `run`: same as `numRuns`
  - `task`: `correctness` to "Test correctness" or `performance` to "Run benchmark"
  - `warmup`: same as `numWarmups`
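To make the parameter formats above concrete, here is a sketch of how such a query string could be parsed. The function name and the handling rules (comma-split for `*Shape` and `localBuild`, plain strings otherwise) are illustrative assumptions, not the tool's actual implementation:

```js
// Sketch: parse benchmark URL parameters. Keys ending in "Shape" become
// numeric shape arrays; localBuild becomes a list of short build names.
function parseBenchmarkParams(queryString) {
  const params = {};
  for (const [key, value] of new URLSearchParams(queryString)) {
    if (key.endsWith('Shape')) {
      params[key] = value.split(',').map(Number);
    } else if (key === 'localBuild') {
      params[key] = value.split(',');
    } else {
      params[key] = value;
    }
  }
  return params;
}

console.log(parseBenchmarkParams('backend=wasm&run=50&sub_2Shape=1,1,1,3&localBuild=webgl,core'));
```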