How to train a fastai model and run it on iOS

This post covers an end-to-end example project: training a ResNet model with fastai and PyTorch, converting it to CoreML, and running it inside a react-native iOS app.

Find the notebook and code in the GitHub repository!

My last article covered how to train a model in fastai and convert it to ONNX to run it in the browser using onnxjs and React.js. This post goes one step further and in a slightly different direction: the goal is to create a react-native iOS app that performs the classification of dog breeds locally on the iPhone using CoreML. If you have not read my last article yet, I recommend you catch up, because the following assumes you already know about the training and ONNX export parts.

Converting the model from ONNX to CoreML

See the notebook section “Export to ONNX” and onwards.

PyTorch has built-in support for exporting to ONNX; for details, see my previous blog post. Going from ONNX to CoreML requires coremltools and onnx-coreml. At first glance you might think that exporting with onnx-coreml is simple, but at least in this particular case, it wasn't so straightforward. Firstly, you need to keep in mind how you normalize your input. In other frameworks it is common to perform normalization before the input data is given to the model. In CoreML, at least to my limited knowledge (see this onnx-coreml issue for reference), normalization happens in the input layer of the CoreML model. When we naively export a PyTorch model, it does not have such a normalization input layer.

Tip: Use Netron to visualize your model architecture.

When you convert the ONNX model to CoreML, you need to provide an image_input_names parameter so the converter knows which layer is the input layer that receives raw image data. This parameter takes a list of strings, which in this case is just the name of the first layer. But what is that name? To find out quickly, I used Netron. This program lets you visualize your ONNX network and displays the name of a layer when you click on it.

Back to normalization: the bias can be applied by providing preprocessing_args to the convert function of onnx-coreml, but to scale the inputs correctly, we need to add a layer to the beginning of our (now ONNX) model. To do that, we first convert the ONNX model to CoreML using onnx-coreml and then use coremltools to load and modify the CoreML model. This is explained in more depth in the notebook.
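To see why an extra layer is needed: CoreML's image preprocessing applies a single scalar image_scale plus one bias per channel, while the usual ImageNet normalization (x/255 − mean)/std needs a different scale per channel. One way to split the work (a sketch assuming the standard ImageNet statistics; check these against your own training transforms) is to let the preprocessing handle the 1/255 and the mean, and let the added layer multiply each channel by 1/std:

```python
# Standard ImageNet normalization statistics (RGB order).
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# CoreML preprocessing computes: y = image_scale * x + bias (bias per channel).
# Targeting (x / 255 - mean) / std, factor it as ((x / 255) - mean) * (1 / std).
image_scale = 1.0 / 255.0
preprocessing_args = {
    "image_scale": image_scale,
    "red_bias": -mean[0],
    "green_bias": -mean[1],
    "blue_bias": -mean[2],
}

# The per-channel factor that still has to be applied by the extra
# scale layer inserted at the front of the converted model:
channel_scale = [1.0 / s for s in std]
print(preprocessing_args, channel_scale)
```

Because image_scale is a single scalar, the per-channel 1/std division cannot be expressed in preprocessing_args alone, which is exactly why the additional scale layer is inserted with coremltools.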

Integrating the model in a react-native iOS app

First, I tried to use the react-native-coreml native module. Unfortunately, I couldn't get it to work. So I borrowed from it and from the react-native docs on Native Modules to create my own native module that did exactly what I needed and no more.

One more takeaway: if you can, stick to Objective-C instead of using Swift via a bridge like I did. It seems like the necessary Swift libraries blow up the bundle size by about 200 MB.

For the react-native app, I used react-native-paper for a few UI elements and react-native-camera to capture pictures. react-native-camera needs more effort than one might think. It exposes a very bare-bones API, which is super flexible but does not provide much convenience out of the box. For example, you need to design and code your own trigger button. All the plugin provides is the raw camera signal and APIs to take a picture or record video. Fortunately, there are examples in the repository, but you have to know to look there.

Feedback welcome

I could and want to say a lot more about the joys and pains of making this little project, but I also want to get this information out there as soon as possible. So if you have any specific questions, suggestions or recommendations, please do not hesitate to contact me on Twitter @davidpfahler. I almost certainly made mistakes in the implementation or explanation, so if you find any, please let me know!

I also created a thread on the fastai forums, so if you are active there, please join the discussion!


I want to thank first and foremost the amazing team and community at the fastai forums, who are always most helpful. This little pet project obviously stands on the shoulders of (open-source) giants. Thanks to all of them!


David Pfahler

I am a German trainee lawyer, software engineer and data scientist. I write about these topics here.
