Developing a face detection application using Flutter

We will use the Firebase ML Kit Face Detection API to detect the faces in an image. The key features of the Firebase Vision Face Detection API are as follows:

  1. Recognize and return the coordinates of facial features such as the eyes, ears, cheeks, nose, and mouth of every face detected.
  2. Get the contours of detected faces and facial features.
  3. Detect facial expressions, such as whether a person is smiling or has one eye closed.
  4. Get an identifier for each individual face detected in a video frame. This identifier is consistent across invocations and can be used to perform image manipulation on a particular face in a video stream.

Let's begin with the first step: adding the required dependencies.

Add Firebase to Flutter

Adding the pub dependencies

We start by adding the pub dependencies. A dependency is an external package that is required for a particular functionality to work. All of the required dependencies for the application are specified in the pubspec.yaml file. For every dependency, the name of the package should be mentioned. This is generally followed by a version number specifying which version of the package we want to use. Additionally, the source of the package, which tells pub how to locate the package, and any description that the source needs to find the package can also be included.
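For instance, a dependency in pubspec.yaml can be as simple as a name plus a version constraint, or it can spell out a source explicitly. The following sketch illustrates both forms (some_package and its URL are hypothetical placeholders):

dependencies:
  # Hosted on pub, located by name and version constraint.
  image_picker: ^0.6.1
  # Fetched from a Git repository: the source (git) plus the
  # description that source needs to find the package (its url).
  some_package:
    git:
      url: https://github.com/example/some_package.git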

To get information about specific packages, visit https://pub.dartlang.org/packages.

The dependencies that we will be using for this project are as follows:

  1. firebase_ml_vision: A Flutter plugin that adds support for the functionalities of Firebase ML Kit
  2. image_picker: A Flutter plugin that enables taking pictures with the camera and selecting images from the Android or iOS image library

Here’s what the dependencies section of the pubspec.yaml file will look like after including the dependencies:

dependencies:
  firebase_ml_vision: ^0.9.2+1
  image_picker: ^0.6.1+4

In order to use the dependencies that we have added to the pubspec.yaml file, we need to install them. This can be done by running flutter pub get in the Terminal, or by clicking Get Packages, located on the right side of the action ribbon at the top of the pubspec.yaml file.
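Once the packages are installed, they are imported wherever they are needed. The excerpt doesn't show the import lines, but the canonical package paths for these two plugins are:

import 'package:firebase_ml_vision/firebase_ml_vision.dart';
import 'package:image_picker/image_picker.dart';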

Building the application

Now we build the application. The application, named Face Detection, will consist of two screens. The first will have a text title and two buttons, allowing the user to choose an image from the device's picture gallery or take a new image using the camera. The user is then directed to the second screen, which shows the image that was selected for face detection, with the detected faces highlighted. The following screenshot shows the flow of the application:

[Screenshot: the flow of the application across its two screens]

Creating the first screen

In the following sections, we will build each of these elements, called widgets, and then bring them together under a scaffold.

In English, scaffold means a structure or a platform that provides some support. In terms of Flutter, a scaffold can be thought of as a primary structure on the device screen upon which all the secondary components, in this case widgets, can be placed together.

In Flutter, every UI component is a widget. Widgets form the central class hierarchy in the Flutter framework. If you have worked previously with Android Studio, a widget can be thought of as a TextView, a Button, or any other view component.

Building the row title

Next, we build the row title. We start by creating a stateful widget, FaceDetectionHome, inside the face_detection_home.dart file. FaceDetectionHomeState will contain all the methods required to build the first screen of the application.
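The excerpt doesn't show the widget boilerplate itself, so here is a minimal sketch of how the two classes might be declared, assuming the standard StatefulWidget pattern (the _scaffoldKey field is referenced later by onPickImageSelected() and build()):

import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';

import 'face_detection.dart'; // second screen, defined later

class FaceDetectionHome extends StatefulWidget {
  @override
  FaceDetectionHomeState createState() => FaceDetectionHomeState();
}

class FaceDetectionHomeState extends State<FaceDetectionHome> {
  // Lets onPickImageSelected() show snackbars on this screen's Scaffold.
  final GlobalKey<ScaffoldState> _scaffoldKey = GlobalKey<ScaffoldState>();

  // buildRowTitle(), createButton(), and the other methods described
  // in this section are members of this class.
}

With that scaffolding in place, let's define a method called buildRowTitle() to create the text header: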

Widget buildRowTitle(BuildContext context, String title) {
  return Center(
    child: Padding(
      padding: EdgeInsets.symmetric(horizontal: 8.0, vertical: 16.0),
      child: Text(
        title,
        style: Theme.of(context).textTheme.headline,
      ), // Text
    ), // Padding
  ); // Center
}

The method creates a widget with a title using the value passed in as the title string argument. The text is centered horizontally using Center() and given a padding of 8.0 horizontally and 16.0 vertically using EdgeInsets.symmetric(horizontal: 8.0, vertical: 16.0). The Padding contains a child Text widget that displays the title. The typographical style of the text is set to textTheme.headline to change the default size, weight, and spacing of the text.

Flutter uses the logical pixel as a unit of measure, which is the same as device-independent pixel (dp). Further, the number of device pixels in each logical pixel can be expressed in terms of devicePixelRatio. For the sake of simplicity, we will just use numeric terms to talk about width, height, and other measurable properties.
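If you ever need the actual ratio on a particular device, it can be read at runtime. A one-line illustration, usable anywhere a BuildContext is available:

// Number of physical device pixels per logical pixel on this screen.
final double dpr = MediaQuery.of(context).devicePixelRatio;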

Building the row with button widgets

  1. We start by defining createButton() to create buttons with all the required properties:
Widget createButton(String imgSource) {
  return Expanded(
    child: Padding(
      padding: EdgeInsets.symmetric(horizontal: 8.0),
      child: RaisedButton(
        color: Colors.blue,
        textColor: Colors.white,
        splashColor: Colors.blueGrey,
        onPressed: () {
          onPickImageSelected(imgSource);
        },
        child: Text(imgSource),
      ),
    ),
  );
}

The method returns a widget, that is, a RaisedButton, wrapped with a horizontal padding of 8.0. The color of the button is set to blue and the color of the button text is set to white. splashColor is set to blueGrey so that a ripple effect indicates when the button is tapped.

The code snippet inside onPressed is executed when the button is pressed. Here, we make a call to onPickImageSelected(), which is defined in a later section of the chapter. The text that is displayed inside the button is set to imgSource, which, here, can be the gallery or the camera. Additionally, the whole code snippet is wrapped inside Expanded() to make sure that the created button completely occupies all the available space.

2. Now we use the buildSelectImageRowWidget() method to build a row with two buttons to list the two image sources:

Widget buildSelectImageRowWidget(BuildContext context) {
  return Row(
    children: <Widget>[
      createButton('Camera'),
      createButton('Gallery'),
    ],
  );
}

3. Now, let’s define onPickImageSelected(). This method uses the image_picker library to direct the user either to the gallery or the camera to get an image:

void onPickImageSelected(String source) async {
  var imageSource;
  if (source == 'Camera') {
    imageSource = ImageSource.camera;
  } else {
    imageSource = ImageSource.gallery;
  }
  final scaffold = _scaffoldKey.currentState;
  try {
    final file = await ImagePicker.pickImage(source: imageSource);
    if (file == null) {
      throw Exception('File is not available');
    }
    Navigator.push(
      context,
      MaterialPageRoute(builder: (context) => FaceDetection(file)),
    );
  } catch (e) {
    scaffold.showSnackBar(SnackBar(
      content: Text(e.toString()),
    ));
  }
}

First, imageSource is set to either camera or gallery using an if-else block. If the value passed is Camera, the source of the image file is set to ImageSource.camera; otherwise, it is set to ImageSource.gallery.

Once the source of the image is decided, pickImage() is called with the chosen imageSource. If the source is Camera, the user is directed to the camera to take an image; otherwise, they are directed to choose an image from the gallery.

To handle the case where the image is not returned successfully by pickImage(), the call to the method is enclosed in a try-catch block. If an exception occurs, execution is directed to the catch block and a snackbar with an error message is shown on the screen by calling showSnackBar():

[Screenshot: the snackbar displaying the error message]

After the image has been chosen successfully and the file variable holds the required URI, the user migrates to the next screen, FaceDetection, which is discussed in the Creating the second screen section. Navigator.push() pushes the new route, passing the current context and the chosen file into the constructor. On the FaceDetection screen, we populate the image holder with the selected image and show details about the detected faces.

Creating the whole user interface

Now we create the whole user interface. All of the created widgets are put together inside the build() method, overridden in the FaceDetectionHomeState class.

@override
Widget build(BuildContext context) {
  return Scaffold(
    key: _scaffoldKey,
    appBar: AppBar(
      centerTitle: true,
      title: Text('Face Detection'),
    ),
    body: SingleChildScrollView(
      child: Column(
        children: <Widget>[
          buildRowTitle(context, 'Pick Image'),
          buildSelectImageRowWidget(context),
        ],
      ),
    ),
  );
}

The text of the toolbar is set to Face Detection by setting the title inside the appBar. Also, the text is aligned to the center by setting centerTitle to true. Next, the body of the scaffold is a column of widgets. The first is a text title and the next is a row of buttons.

Creating the second screen

Next, we create the second screen. After successfully obtaining the image selected by the user, we migrate to the second screen of the application, where we display the selected image. Also, we mark the faces that were detected in the image using Firebase ML Kit. We start by creating a stateful widget named FaceDetection inside a new Dart file, face_detection.dart.
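The boilerplate for this widget isn't shown in the excerpt either; a minimal sketch, assuming the same StatefulWidget pattern as the first screen, might look like this:

import 'dart:io';
import 'dart:ui' as ui;

import 'package:firebase_ml_vision/firebase_ml_vision.dart';
import 'package:flutter/material.dart';

class FaceDetection extends StatefulWidget {
  // The selected image file is received through the constructor,
  // as described in the next section.
  @override
  FaceDetectionState createState() => FaceDetectionState();
}

class FaceDetectionState extends State<FaceDetection> {
  // Decoded image used for painting; set by loadImage() later on.
  ui.Image image;

  // The fields and methods described in the following sections
  // (faces, detectFaces(), loadImage(), build()) live here.
}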

Getting the image file

First of all, the image that was selected needs to be passed to the second screen for analysis. We do this using the FaceDetection() constructor. Constructors are special methods that are used for initializing the variables of a class. They have the same name as the class. Constructors do not have a return type and are called automatically when the object of the class is created. We declare a file variable and initialize it using a parameterized constructor as follows:

File file;

FaceDetection(File file) {
  this.file = file;
}
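As a side note, Dart's initializing-formal parameter syntax expresses the same constructor more compactly; the following is equivalent to the version above:

File file;

// `this.file` in the parameter list assigns the argument
// directly to the field; no constructor body is needed.
FaceDetection(this.file);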

Analyzing the image to detect faces

Now we use the FirebaseVision face detector to detect the faces in the image, using the following steps:

1. First, we create a global faces variable inside the FaceDetectionState class, as shown in the following code:

List<Face> faces; 

2. Now we define a detectFaces() method, inside which we instantiate FaceDetector as follows:

void detectFaces() async {
  final FirebaseVisionImage visionImage =
      FirebaseVisionImage.fromFile(widget.file);
  final FaceDetector faceDetector = FirebaseVision.instance.faceDetector(
      FaceDetectorOptions(
          mode: FaceDetectorMode.accurate,
          enableLandmarks: true,
          enableClassification: true));
  List<Face> detectedFaces = await faceDetector.processImage(visionImage);
  for (var i = 0; i < detectedFaces.length; i++) {
    final double smileProbability = detectedFaces[i].smilingProbability;
    print("Smiling: $smileProbability");
  }
  faces = detectedFaces;
  loadImage(widget.file); // defined in the Marking the detected faces section
}

We first create a FirebaseVisionImage instance called visionImage from the image file that was selected, using the FirebaseVisionImage.fromFile() method. Next, we create an instance of FaceDetector using the FirebaseVision.instance.faceDetector() method and store it in a variable called faceDetector. Now we call processImage() on faceDetector, passing visionImage as a parameter. The method call returns a list of detected faces, which is stored in a list variable called detectedFaces. Note that processImage() returns a list of type Face. Face is an object whose attributes contain the characteristic features of a detected face. A Face object has the following attributes:

  1. getLandmark
  2. hashCode
  3. hasLeftEyeOpenProbability
  4. hasRightEyeOpenProbability
  5. headEulerAngleY
  6. headEulerAngleZ
  7. leftEyeOpenProbability
  8. rightEyeOpenProbability
  9. smilingProbability

Now we iterate through the list of faces using a for loop. We can get the value of smilingProbability for the i-th face using detectedFaces[i].smilingProbability. We store it in a variable called smileProbability and print its value to the console using print(). Finally, we set the value of the global faces list to detectedFaces.
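The other classification fields work the same way. For instance, because we enabled classification in FaceDetectorOptions, the loop could also report the eye-open probabilities; a small illustrative extension, not part of the original code:

for (var i = 0; i < detectedFaces.length; i++) {
  if (detectedFaces[i].hasLeftEyeOpenProbability &&
      detectedFaces[i].hasRightEyeOpenProbability) {
    print("Left eye open: ${detectedFaces[i].leftEyeOpenProbability}");
    print("Right eye open: ${detectedFaces[i].rightEyeOpenProbability}");
  }
}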

The async modifier added to the detectFaces() method enables asynchronous execution. Note that an async method in Dart does not run on a separate thread: it returns a Future immediately and suspends at each await, keeping the UI responsive, and execution resumes once the awaited operation has completed.

To make sure that the faces are detected as soon as the user migrates to the second screen, we override initState() and call detectFaces() from inside it:

@override
void initState() {
  super.initState();
  detectFaces();
}

initState() is the first method called after the widget's State object is created, which makes it the right place to kick off face detection.

Marking the detected faces

Next, we mark the detected faces. After detecting all the faces present in the image, we paint rectangular boxes around them using the following steps:

  1. First we need to convert the image file into raw bytes. To do so, we define a loadImage method as follows:
void loadImage(File file) async {
  final data = await file.readAsBytes();
  await decodeImageFromList(data).then(
    (value) => setState(() {
      image = value;
    }),
  );
}

The loadImage() method takes the image file as input. We convert the contents of the file into bytes using file.readAsBytes() and store the result in data. Next, we call decodeImageFromList(), which loads a single image frame from a byte array into a ui.Image object, and store the final value in image. We call this method from inside detectFaces(), which was defined earlier.

2. Now we define a CustomPainter class called FacePainter to paint rectangular boxes around all the detected faces. We start as follows:

class FacePainter extends CustomPainter {
  ui.Image image;
  List<Face> faces;
  List<Rect> rects = [];

  FacePainter(ui.Image img, List<Face> faces) {
    this.image = img;
    this.faces = faces;
    for (var i = 0; i < faces.length; i++) {
      rects.add(faces[i].boundingBox);
    }
  }
}

We start by defining three member variables: image, faces, and rects. image, of type ui.Image, holds the decoded image to be drawn. faces is the List of Face objects that were detected. Both image and faces are initialized inside the FacePainter constructor. We then iterate through faces, get the bounding rectangle of each face using faces[i].boundingBox, and store it in the rects list.

3. Next, we override paint() to paint the Canvas with rectangles, as follows:

@override
void paint(Canvas canvas, Size size) {
  final Paint paint = Paint()
    ..style = PaintingStyle.stroke
    ..strokeWidth = 8.0
    ..color = Colors.red;
  canvas.drawImage(image, Offset.zero, Paint());
  for (var i = 0; i < rects.length; i++) {
    canvas.drawRect(rects[i], paint);
  }
}

We start by creating an instance of the Paint class to describe the style for painting the Canvas, that is, the image we have been working with. Since we need to paint rectangular borders, we set style to PaintingStyle.stroke to paint just the edges of the shape. Next, we set strokeWidth, that is, the width of the rectangular border, to 8.0. Also, we set the color to red. Finally, we draw the image using canvas.drawImage(). We then iterate through the rectangles of the detected faces stored in the rects list and draw each one using canvas.drawRect().
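One detail the excerpt leaves out: CustomPainter also requires a shouldRepaint() override. A minimal implementation, assuming we simply repaint whenever the painter's inputs change, would be:

@override
bool shouldRepaint(FacePainter oldDelegate) {
  // Repaint only if the image or the detected faces have changed.
  return image != oldDelegate.image || faces != oldDelegate.faces;
}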

Displaying the final image on the screen

After successfully detecting faces and painting rectangles around them, we will now display the final image on the screen. We first build the final scaffold for our second screen. We will override the build() method inside FaceDetectionState to return the scaffold as follows:

@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      title: Text("Face Detection"),
    ),
    body: (image == null)
        ? Center(child: CircularProgressIndicator())
        : Center(
            child: FittedBox(
              child: SizedBox(
                width: image.width.toDouble(),
                height: image.height.toDouble(),
                child: CustomPaint(painter: FacePainter(image, faces)),
              ),
            ),
          ),
  );
}

We start by creating the appBar for the screen, giving it the title Face Detection. Next, we specify the body of the scaffold. We first check the value of image, which stores the decoded version of the selected picture. While it is still null, face detection is in progress, so we show a CircularProgressIndicator(). Once detection is complete, the user interface is updated to show a SizedBox with the same width and height as the selected image. The child property of the SizedBox is set to CustomPaint, which uses the FacePainter class we created earlier to paint rectangular borders around the detected faces.

Complete Code: Here
