Editor’s Note: This tutorial is for demonstration purposes only. All computer vision-based surveillance systems need to be designed and implemented with robust ethical standards, and models should be built in ways that minimize bias. Additionally, for complete guidance on COVID-19 public health standards, please refer to the documentation provided by the World Health Organization.
In places where wearing a mask is essential, it can be helpful to identify whether people are wearing masks or not. Here, we’re going to work with an on-device machine learning model to detect whether a person in an image is wearing a mask, and we’re going to do this in the Flutter mobile ecosystem.
This tutorial article aims to demonstrate how to use TensorFlow’s lightweight ML library (TensorFlow Lite) to perform image classification in order to complete the aforementioned task.
In order to do any sort of image classification, we need a pre-trained model. If you’d rather, you can also try training your own model using Google’s Teachable Machine—a web-based tool that exports TensorFlow models—or other no-code model building tools like Fritz AI.
So, let’s get started!
Create a new Flutter project
First, we need to create a new Flutter project. For that, make sure that the Flutter SDK and other Flutter app development-related requirements are properly installed. If everything is set up, we can simply run the following command in the desired local directory to set up a new project:
flutter create mask_detection
After the project has been set up, we can navigate into the project directory and execute the following command in the terminal to run the project on an available emulator or an actual device:
flutter run
After successfully building the app, we will get the following result on the emulator screen:
Creating an Image View on the Screen
Here, we’re going to implement the UI to fetch an image from the device library and display it on the app screen. For this, we’re going to make use of the Image Picker library, which provides modules for fetching images and videos from the device camera, gallery, etc.
First, we need to install the image_picker library. For that, we need to add the line in the following code snippet under the dependencies section of our project’s pubspec.yaml file:
image_picker: ^0.6.7+14
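For context, the dependencies section of pubspec.yaml should look roughly like this after the change (the flutter entry already exists in the default template; the pinned version is the one used in this tutorial, and newer releases may differ):

dependencies:
  flutter:
    sdk: flutter
  # Plugin for picking images from the gallery or camera
  image_picker: ^0.6.7+14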
Now, we need to import the necessary packages in the main.dart file of our project:
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
In the main.dart file, we’ll have the MyHomePage stateful widget. In its State class, we need to declare a variable to store the image file once it has been fetched. Here, we’re going to use a File-type variable named _imageFile:

File _imageFile;
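For reference, here’s a minimal sketch of where this declaration lives, assuming the default class names generated by flutter create (your State class name may differ):

class _MyHomePageState extends State<MyHomePage> {
  // Holds the image picked from the gallery; null until the user selects one
  File _imageFile;

  @override
  Widget build(BuildContext context) {
    // The UI implemented in the next step goes here
    return Scaffold(appBar: AppBar(title: Text("Mask Detection")));
  }
}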
Now, we’re going to implement the UI, which will enable users to pick and display a given image. The UI will have an image view section and a button that allows users to pick an image from their gallery. The overall UI template is provided in the code snippet below:
@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      title: Text("Mask Detection"),
    ),
    body: Center(
      child: Column(
        children: [
          Container(
            margin: EdgeInsets.all(15),
            padding: EdgeInsets.all(15),
            decoration: BoxDecoration(
              color: Colors.white,
              borderRadius: BorderRadius.all(
                Radius.circular(15),
              ),
              border: Border.all(color: Colors.white),
              boxShadow: [
                BoxShadow(
                  color: Colors.black12,
                  offset: Offset(2, 2),
                  spreadRadius: 2,
                  blurRadius: 1,
                ),
              ],
            ),
            // Show the picked image if available, otherwise a placeholder
            child: (_imageFile != null)
                ? Image.file(_imageFile)
                : Image.network('https://i.imgur.com/sUFH1Aq.png'),
          ),
          RaisedButton(
            onPressed: () {},
            child: Icon(Icons.camera),
          ),
        ],
      ),
    ),
  );
}
Here, we have used a Container widget with a card-like style for the image display. We’ve also used conditional rendering to display a placeholder image until the actual image is selected and loaded. Finally, we have used a RaisedButton widget to render a button just below the image view section.
Hence, we’ll get the result as shown in the emulator screenshot below:
Function to fetch and display the image
Now, we’re going to implement a function that enables users to open the gallery, select an image, and then show the image in the image view section. The overall implementation of the function is provided in the code snippet below:
Future selectImage() async {
  final picker = ImagePicker();
  var image = await picker.getImage(source: ImageSource.gallery, maxHeight: 300);
  setState(() {
    if (image != null) {
      _imageFile = File(image.path);
    } else {
      print('No image selected.');
    }
  });
}
Here, we have initialized an ImagePicker instance and used the getImage method it provides to fetch an image from the gallery into the image variable. Then, we have set the _imageFile state to the fetched image file using the setState method. This causes the main build method to re-render and show the image on the screen.
Now, we need to call the selectImage function in the onPressed property of the RaisedButton widget, as shown in the code snippet below:
RaisedButton(
  onPressed: () {
    selectImage();
  },
  child: Icon(Icons.camera),
),
Hence, we will get the result as shown in the demo below:
As we can see, as soon as we select an image from the gallery, the selected image is shown on the screen instead of the placeholder image.
Performing Image Classification with TensorFlow
Now, it’s time to configure our mask image classification model. For that, we are going to use a model trained with Google’s Teachable Machine. The accuracy of the results may vary depending on the shape, size, and color of the mask in a given image.
Here, I’ve provided the trained model for you. You can go ahead and download the model files from this link. If you’d prefer, you can also train your own model using Teachable Machine. The model in this tutorial was trained on labeled images of masked and unmasked people.
Once downloaded, we will get two files:
- mask_model_unquant.tflite
- mask_model_labels.txt
The labels file here only distinguishes between a masked and an unmasked person.
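For illustration, a labels file of this kind is plain text with one class per line, and depending on the export each line may be prefixed with its index. The label names below are hypothetical placeholders—the actual names are whatever was used during training:

0 Mask
1 No Mask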
We need to move the two files provided to the ./assets folder in the main project directory:
Then, we need to enable access to the asset files in pubspec.yaml:
assets:
  - assets/mask_model_labels.txt
  - assets/mask_model_unquant.tflite
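Note that the assets entry belongs under the flutter: section of pubspec.yaml, indented as shown below (uses-material-design is already present in the default template):

flutter:
  uses-material-design: true
  assets:
    - assets/mask_model_labels.txt
    - assets/mask_model_unquant.tflite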
Installing TensorFlow Lite
Here, we are going to install the tflite package. It is a Flutter plugin that allows you to access the TensorFlow Lite APIs, and it supports image classification, object detection, Pix2Pix, Deeplab, and PoseNet on both iOS and Android platforms.
In order to install the plugin, we need to add the following line under the dependencies section of the pubspec.yaml file of our project:
tflite: ^1.1.1
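After this addition, the dependencies section of pubspec.yaml should look roughly like the following (versions are the ones pinned in this tutorial; newer releases may have breaking changes):

dependencies:
  flutter:
    sdk: flutter
  image_picker: ^0.6.7+14
  tflite: ^1.1.1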
For Android, we need to add the following setting to the android block of the ./android/app/build.gradle file:
aaptOptions {
    noCompress 'tflite'
    noCompress 'lite'
}
Here, we need to check that the app builds properly by executing the flutter run command.
If an error occurs, we may need to increase the minimum SDK version to 19 or higher in the ./android/app/build.gradle file for the tflite plugin to work:

minSdkVersion 19
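For context, both of these settings end up inside the android block of ./android/app/build.gradle; a rough sketch, with the other default entries omitted:

android {
    defaultConfig {
        // The tflite plugin requires Android API level 19 or higher
        minSdkVersion 19
    }
    aaptOptions {
        // Keep the model files uncompressed in the APK
        noCompress 'tflite'
        noCompress 'lite'
    }
}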
Once the app builds properly, we can use the TensorFlow Lite package in our Flutter project.
Using TensorFlow Lite for Image Classification
To get our model working, we first need to import the TFLite package into our main.dart file, as shown in the code snippet below:
import 'package:tflite/tflite.dart';
Loading the Model
Next, we need to load the model files into the app. For that, we are going to configure a function called loadModel. Then, by making use of the loadModel method provided by the Tflite instance, we’re going to load the model files from the assets folder into our app. We need to set the model and labels parameters inside the loadModel method, as shown in the code snippet below:
Future loadModel() async {
  // Release any previously loaded model before loading a new one
  Tflite.close();
  String result;
  result = await Tflite.loadModel(
    model: "assets/mask_model_unquant.tflite",
    labels: "assets/mask_model_labels.txt",
  );
  print(result);
}
Next, we need to call the function inside the initState method so that it triggers as soon as we enter the screen:
@override
void initState() {
  super.initState();
  loadModel();
}
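As an optional addition (not part of the original flow), it’s good practice to release the interpreter when the widget is disposed, using the same Tflite.close() method we already call in loadModel:

@override
void dispose() {
  // Free the resources held by the TensorFlow Lite interpreter
  Tflite.close();
  super.dispose();
}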
Performing Image Classification
Now, we’re going to write the code that actually performs the image classification. First, we need to initialize a variable to store the result of each classification, as shown in the code snippet below:
List _classifiedResult;
This _classifiedResult variable, of type List, will store the result of each classification.
Next, we need to define a function called classifyImage that takes an image file as a parameter. The overall implementation of the function is provided in the code snippet below:
Future classifyImage(image) async {
  // Return early if the user canceled the picker, so we never
  // dereference a null image below
  if (image == null) {
    print('No image selected.');
    return;
  }
  _classifiedResult = null;
  // Run the TensorFlow Lite image classification model on the image
  print("classification start $image");
  final List result = await Tflite.runModelOnImage(
    path: image.path,
    numResults: 6,
    threshold: 0.05,
    imageMean: 127.5,
    imageStd: 127.5,
  );
  print("classification done");
  setState(() {
    _imageFile = File(image.path);
    _classifiedResult = result;
  });
}
Here, we have used the runModelOnImage method provided by the Tflite instance to classify the selected image. As parameters, we have passed the image path, the number of results, the classification threshold, and other optional configurations for better classification. The function also returns early when no image was selected, so we never try to run the model on a null image. After a successful classification, we set the result to the _classifiedResult list.
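For reference, runModelOnImage resolves to a list of maps, each with index, label, and confidence keys. With this two-class model, the printed result should look roughly like the following (the exact label strings come from the labels file, and the confidence values here are illustrative):

[{index: 0, label: Mask, confidence: 0.972}, {index: 1, label: No Mask, confidence: 0.028}]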
Now, we need to call the classifyImage function inside the selectImage function and pass the image file as a parameter, as shown in the code snippet below:
Future selectImage() async {
  final picker = ImagePicker();
  var image = await picker.getImage(source: ImageSource.gallery, maxHeight: 300);
  classifyImage(image);
}
This allows us both to display the selected image in the image view and to classify it as soon as we pick it from the gallery.
Now, we need to configure the UI template to display the results of the classification. We are going to show the classification results in a card-style list, just below the RaisedButton widget.
The implementation of the overall UI of the screen is provided in the code snippet below:
@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      title: Text("Mask Detection"),
    ),
    body: Center(
      child: Column(
        children: [
          Container(
            margin: EdgeInsets.all(15),
            padding: EdgeInsets.all(15),
            decoration: BoxDecoration(
              color: Colors.white,
              borderRadius: BorderRadius.all(
                Radius.circular(15),
              ),
              border: Border.all(color: Colors.white),
              boxShadow: [
                BoxShadow(
                  color: Colors.black12,
                  offset: Offset(2, 2),
                  spreadRadius: 2,
                  blurRadius: 1,
                ),
              ],
            ),
            child: (_imageFile != null)
                ? Image.file(_imageFile)
                : Image.network('https://i.imgur.com/sUFH1Aq.png'),
          ),
          RaisedButton(
            onPressed: () {
              selectImage();
            },
            child: Icon(Icons.camera),
          ),
          SizedBox(height: 20),
          SingleChildScrollView(
            child: Column(
              // Map each classification result to a card showing the
              // label and its confidence as a percentage
              children: _classifiedResult != null
                  ? _classifiedResult.map((result) {
                      return Card(
                        elevation: 0.0,
                        color: Colors.lightBlue,
                        child: Container(
                          width: 300,
                          margin: EdgeInsets.all(10),
                          child: Center(
                            child: Text(
                              "${result["label"]} : ${(result["confidence"] * 100).toStringAsFixed(1)}%",
                              style: TextStyle(
                                color: Colors.black,
                                fontSize: 18.0,
                                fontWeight: FontWeight.bold,
                              ),
                            ),
                          ),
                        ),
                      );
                    }).toList()
                  : [],
            ),
          ),
        ],
      ),
    ),
  );
}
Here, just below the RaisedButton widget, we have applied the SingleChildScrollView widget so that the content inside it is scrollable. Then, we’ve used a Column widget to lay out the widgets inside it vertically.
Inside the Column widget, we’ve mapped the results of the classification using the map method and displayed each result in percentage format inside a Card widget.
Hence, we will get the result as shown in the demo below:
We can see that, as soon as we select an image from the gallery, the classification result is displayed on the screen as well.
At last, we have successfully implemented mask detection in a Flutter app using TensorFlow Lite.
Conclusion
In this tutorial, we were able to accurately classify whether or not a person in a given image was wearing a face mask. The overall process was simplified and made easier by the availability of the TensorFlow Lite library for Flutter, as well as a pre-trained classification model.
The model files were made available for this tutorial. But as previously noted, you can create your own trained models using Google’s Teachable Machine. The TensorFlow Lite library offers a state-of-the-art way to use ML in Flutter and other mobile applications.
Now, the challenge could be to train your own model and see if you can make it as accurate, or even more accurate. The TensorFlow Lite library is also capable of other machine learning tasks like object detection, pose estimation, and gesture detection, which you should definitely try out as well.
The overall coding implementation is available on GitHub.
Written by Krissanawat Kaewsanmuang