Real Time Object Detection On Android


Install Tensorflow Object Detection Api And Dependencies

Tensorflow Android: Real-time Object Detection in 6 steps

Once the project is set up, the TensorFlow Object Detection API should be located in rf-models/research/object_detection. The code base is currently maintained by the community, and we will call the module from there for our model training later.

For any further work on top of the TensorFlow Object Detection API code base, check out model_main.py and model_lib.py as a starting point.

Now we need to install the rest of the dependencies. To install the required Python libraries:

To install COCO API

Note: If you are having issues compiling the COCO API, make sure you've installed Cython and Numpy before compiling.

To install Protobufs: the TensorFlow Object Detection API uses Protobufs to configure model and training parameters.

Add Tensorflow Libraries to PYTHONPATH

When running locally, the rf-models/research/ and rf-models/research/slim directories need to be appended to PYTHONPATH so that the Python modules from the TensorFlow Object Detection API are on the search path; they will be called from the model scripts in later stages.

Note: The above script needs to be run from every new terminal session. Alternatively, you can add it to your ~/.bashrc file with the absolute path as a permanent solution.

Configure The Object Detector

To detect and track objects, first create an instance of ObjectDetector and optionally specify any detector settings that you want to change from the default.

Configure the object detector for your use case with an ObjectDetectorOptions object. You can change object detector settings such as the detection mode, whether to classify detected objects, and whether to detect multiple objects; a minimal configuration sketch follows.
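A minimal configuration sketch in Kotlin, assuming the standard ML Kit object detection dependency; the mode and flags shown here are illustrative choices, not the only ones available:

import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

// Single-image mode with classification and multiple-object detection enabled.
val options = ObjectDetectorOptions.Builder()
    .setDetectorMode(ObjectDetectorOptions.SINGLE_IMAGE_MODE)
    .enableMultipleObjects()
    .enableClassification()
    .build()

// Create the detector client from the options.
val objectDetector = ObjectDetection.getClient(options)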
Final Step: Turn On Detector With Images

Now plug the detection function into the captured image! In the onActivityResult function, call runObjectDetection after you send the same image for display:

// TODO: run through ODT and display result
runObjectDetection(bitmap)
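The codelab implements runObjectDetection itself; a minimal sketch of what it typically does with the options built in the previous step (debugPrint is assumed to be the codelab's logging helper) looks like this:

private fun runObjectDetection(bitmap: Bitmap) {
    // Wrap the captured bitmap in an ML Kit InputImage (rotation already applied, hence 0).
    val image = InputImage.fromBitmap(bitmap, 0)

    // objectDetector is the client created from ObjectDetectorOptions earlier.
    objectDetector.process(image)
        .addOnSuccessListener { results ->
            // Log each DetectedObject; visualization is added in a later step.
            debugPrint(results)
        }
        .addOnFailureListener { e ->
            Log.e("MLKit-ODT", e.message.toString())
        }
}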

Now compile and run the codelab by clicking Run in the Android Studio toolbar. Look at the Logcat window inside the IDE; you should see something similar to:

D/MLKit-ODT: Detected object: 0
D/MLKit-ODT:   Category: 1
D/MLKit-ODT:   trackingId: null
D/MLKit-ODT:   entityId: /g/11g0srqwrg
D/MLKit-ODT:   size:  -
D/MLKit-ODT: Confidence: 91%
D/MLKit-ODT: Detected object: 1
D/MLKit-ODT:   Category: 0
D/MLKit-ODT:   trackingId: null
D/MLKit-ODT:   entityId: /m/0bl9f
D/MLKit-ODT:   size:  -

which means that the detector saw two objects with:

    • Category 1 (and Category 0 for the second object)
    • no trackingId
    • a size inside a rectangle of —
    • the detector is pretty confident that the first one is a Home Good
    • an entityId, which this codelab does not use.

Technically, that is all you need to get ML Kit Object Detection to work: you have it all at this point. Congratulations! On the UI side you are still where you started, but you can make use of the detected results, for example by drawing bounding boxes, to create a better experience. Let's go to the next step: post-processing the detected results!

In the previous steps, you printed the detection result to Logcat: simple and fast. In this section, you will apply the result to the image:


    Why This Is Useful

With the machine learning feature behind this basic app, there are a ton of different features you can create:

    • Alert the owners with a text message if the dog walker hasn't returned.
    • Record a message telling your dog to "Sit down!" when they're running around the room with no one around. I bet you could capture funny photos of your dog in these moments, too.
    • Show the user a message when a cat or dog is detected.
    • Sound an alarm when the camera detects cats.

Of course, not many people have a spare Android tablet or phone that they can use as an expensive pet monitoring camera, but this is just a simple example among many different possibilities for how you might create an app with object detection using Fritz. I can't wait to see what all the creative developers of the world build using object detection.

    Got an idea? Leave a comment!

I'm a lead developer at Fritz specializing in mobile machine learning. If you're looking to create features powered by AI/ML, we offer prebuilt APIs and custom model support.


    How To Train A Custom Mobile Object Detection Model


In this post, we walk through how to train an end-to-end custom mobile object detection model. We will use the state-of-the-art YOLOv4-tiny Darknet model and convert it to TensorFlow Lite for on-device inference.

    Short on time? Jump to the Mobile Object Detection Colab with YOLOv4-tiny Notebook.

We will take the following steps to go from data to deployment:

    • Convert Darknet Model to TensorFlow Lite
    • Export Weights for Future Inference
    • Deploy on Device


    Analysis Of Deep Learning Mobile Platforms

This section analyses popular machine learning frameworks deployed on portable platforms, with a focus on smartphones. The investigation covers four open-source machine learning libraries: OpenCV, TFM, TensorFlow Lite and the Qualcomm Neural Processing Software Development Kit. Other platforms such as Keras were not considered, because Keras creates an abstraction layer over TensorFlow, increasing overhead and losing control over complex architectures. The Google API was likewise not considered, because it ships with a set of ML models already included in TensorFlow, which makes it easy to use for non-expert developers. Because the study focuses on the object recognition process, which is the most resource-intensive task, mature and consolidated state-of-the-art ML platforms were selected.

    In the following subsections, the deployment of different platforms is thoroughly explained according to Fig. 1.

    Background On Yolov4 Darknet And Tensorflow Lite

YOLOv4 Darknet is currently the most accurate and performant model available, with extensive tooling for deployment. It builds on the YOLO family of real-time object detection models, with a proven track record that includes the popular YOLOv3. Because we are deploying on a lower-end device, we will trade some accuracy for a higher frame rate by using the YOLOv4-tiny variant. For higher accuracy at a lower frame rate, check out the larger model in How To Train YOLOv4.

We will train YOLOv4 in the Darknet framework because a stable TensorFlow implementation is still under construction. Though not especially easy to use, Darknet is a very powerful framework that is commonly used to train YOLO models.

    To deploy on-device, we will use TensorFlow Lite, Google’s official framework for on-device inference. We will convert the weights from Darknet to a TensorFlow SavedModel, and from that to TensorFlow Lite weights. We will finally drop these weights into an app, ready to be configured, tested, and used in the real world.

    With context covered, let’s get started! We recommend reading this post side-by-side with the Mobile Object Detection Colab Notebook.


    Real Time Object Detection With Tensorflow On Android


    Automatic object detection based on deep learning has enormous potential. As a result, it can make a significant contribution in the future. We apply object detection in areas such as the monitoring of industrial manufacturing processes. Other areas are driver assistance systems or support in diagnostics in healthcare. However, to enable large-scale industrial use, we need to be able to apply these methods on resource-limited devices. Therefore, we demonstrate one possible use case with the described application. Our example enables real-time object detection with high quality on Android devices using TensorFlow.

Advances in machine learning and artificial intelligence are enabling revolutionary methods in computer vision and text analysis. The recent trend of deep learning uses models with numerous layers between the input layer and the output layer. With this, the machine learning model learns to recognize more complex patterns in data through artificial neural networks. A possible application field for these models is, for instance, image classification, where it is even possible to recognize multiple objects within an image. Recent advances allow us to use deep learning algorithms on mobile devices and thus to perform real-time object detection. To illustrate this, we give a brief overview of how to deploy a real-time mobile object detector on an Android device.

    Introducing Firebase Ml Kit Object Detection Api


    Earlier this month at Google I/O, the team behind Firebase ML Kit announced the addition of 2 new APIs into their arsenal: object detection and an on-device translation API.

This article focuses on the object detection API, and we'll look at how we can detect and track objects in real time using this API without any network connectivity! Yes, this API uses on-device machine learning to perform object detection.

Just to give you a quick sense of what could be possible with this API, imagine a shopping app that tracks items in the camera feed in real time and shows the user items similar to those on the screen, perhaps even suggesting recipes to go along with certain ingredients!

For this blog post, we'll be building a demo app that detects items from a real-time camera feed.


    What Is Object Detection

Object Detection is the process of finding real-world object instances such as cars, bikes, TVs, flowers, and humans in still images or videos. It allows for the recognition, localization, and detection of multiple objects within an image, which gives us a much better understanding of the image as a whole. It is commonly used in applications such as image retrieval, security, surveillance, and advanced driver assistance systems (ADAS).

Object Detection can be done in multiple ways:

    • Feature-Based Object Detection
    • SVM Classifications with HOG Features
    • Deep Learning Object Detection

In this Object Detection Tutorial, we'll focus on deep learning object detection, as TensorFlow uses deep learning for computation.

Let's move forward with our Object Detection Tutorial and understand its various applications in the industry.

    Convert To Tensorflow Lite

Once we have a trained or partially trained model, to deploy it to mobile devices we first need to use TensorFlow Lite to convert the model to a lightweight version optimized for mobile and embedded devices. It enables on-device machine learning inference with low latency and a smaller binary size, using techniques such as quantized kernels for smaller and faster models.

At this time only SSD models are supported; models like faster_rcnn are not.


    Implementation Of Permission Dispatcher

Implement PermissionsDispatcher for camera permission requests.

    MainActivity.kt

Add each annotation to the target class and method and build once. A function for the permission request is automatically generated.

Change the setupCamera method from earlier as follows, so that it is called from the permission request result. This time we will not implement handling for cases such as the permission being denied.

    MainActivity.kt

override fun onCreate
override fun onRequestPermissionsResult
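The snippet above only preserves the two method signatures. A minimal sketch of the pattern with PermissionsDispatcher (the setupCameraWithPermissionCheck and two-argument onRequestPermissionsResult extensions are generated by the library from the annotations; the body of setupCamera is whatever camera initialization you already have) might look like this:

import android.Manifest
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import permissions.dispatcher.NeedsPermission
import permissions.dispatcher.RuntimePermissions

@RuntimePermissions
class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        // Call the generated wrapper instead of setupCamera() directly.
        setupCameraWithPermissionCheck()
    }

    @NeedsPermission(Manifest.permission.CAMERA)
    fun setupCamera() {
        // Start the camera preview here once the permission is granted.
    }

    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        // Delegate the result to the generated PermissionsDispatcher handler.
        onRequestPermissionsResult(requestCode, grantResults)
    }
}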

This completes the implementation of the camera preview related items. Next, implement the image analysis use case, model loading, result display, and so on.

    Starting The Camera Preview


This is super simple: go to your activity and, inside the onCreate method, simply set the lifecycle owner for cameraView; the library handles the rest of the work for you.
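As a sketch, assuming a CameraView-style library that exposes setLifecycleOwner (the otaliastudios CameraView import below is an assumption; use whichever camera library the project actually depends on):

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.otaliastudios.cameraview.CameraView

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Bind the preview to the activity lifecycle; the library then opens
        // and closes the camera in step with onResume/onPause for you.
        val cameraView = findViewById<CameraView>(R.id.cameraView)
        cameraView.setLifecycleOwner(this)
    }
}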

At this point, if you run the app, you should see a screen that shows you a continuous preview from your camera.

Next up, we'll be adding a method that gives us these preview frames so we can perform some machine learning magic on them!


Initiate The Tflite Interpreter With Model File And Metadata

On the Android side, include the TensorFlow Lite AAR in the dependencies and run the Gradle build.

    object_detector/android/app/build.gradle
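The build.gradle contents are not reproduced here. As a sketch (shown in Gradle Kotlin DSL with an illustrative version number; the project's build.gradle uses Groovy, where the syntax differs only slightly), the dependency block adds the TensorFlow Lite AAR:

dependencies {
    // TensorFlow Lite runtime AAR (version is illustrative)
    implementation("org.tensorflow:tensorflow-lite:2.5.0")
}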

Based on the TensorFlow Lite Android example, I have done the following to set up the TFLite Interpreter for running the model:

    • Read the model file from the assets as a ByteBuffer and initiated the Interpreter with it.
    • Based on the metadata, initiated the input and output buffer objects to use in the model run.
    • Also initialized other metadata required for post-processing the output data.

Here is the constructor method with the initialization described above.
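The constructor itself is not reproduced above; a minimal Kotlin sketch of the same initialization (the article's YoloDetector is written in Java, and the class and field names here are illustrative; the 1 x 416 x 416 x 3 input size comes from the model metadata mentioned later):

import android.content.res.AssetManager
import java.io.FileInputStream
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel
import org.tensorflow.lite.Interpreter

class YoloDetector(assets: AssetManager, modelPath: String) {

    // Memory-map the .lite file from assets; this only works if the asset is stored uncompressed.
    private fun loadModelFile(assets: AssetManager, path: String): MappedByteBuffer {
        val fd = assets.openFd(path)
        FileInputStream(fd.fileDescriptor).use { input ->
            return input.channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }

    // Initiate the interpreter with the memory-mapped model file.
    private val interpreter = Interpreter(loadModelFile(assets, modelPath))

    // Input buffer for a 1 x 416 x 416 x 3 float32 tensor (4 bytes per value).
    private val inputBuffer: ByteBuffer = ByteBuffer
        .allocateDirect(1 * 416 * 416 * 3 * 4)
        .order(ByteOrder.nativeOrder())
}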

See here for the main.dart, MainActivity.java and YoloDetector.java code.

    All set! Let’s do flutter run.

The Flutter and TFLite setup are done. Now we can run TensorFlow Lite inference on the image stream provided by the Flutter camera plugin.

    How To Choose A Model To Customize

Each model comes with its own precision and latency characteristics. You should choose a model that works best for your use case and intended hardware. For example, the Edge TPU models are ideal for inference on Google's Edge TPU on the Pixel 4.

You can use our benchmark tool to evaluate models and choose the most efficient option available.


Motivation And State Of The Art

Humans learn to recognize objects and other humans from birth. The same idea is applied here by training a camera-based system with neural networks and TensorFlow. This gives cameras a comparable kind of intelligence, so they can serve as an artificial eye in many areas, such as surveillance and the detection of objects.

    Visualize The Ml Kit Detection Result


    Use the visualization utilities to draw the ML Kit object detection result on top of the input image.

    Go to where you call debugPrint and add the following code snippet below it:

// Parse ML Kit's DetectedObject and create corresponding visualization data
val detectedObjects = it.map { obj ->
    var text = "Unknown"
    if (obj.labels.isNotEmpty()) {
        val firstLabel = obj.labels.first()
        text = "${firstLabel.text}, ${firstLabel.confidence.times(100).toInt()}%"
    }
    BoxWithText(obj.boundingBox, text)
}

// Draw the detection result on the input bitmap
val visualizedResult = drawDetectionResult(bitmap, detectedObjects)

// Show the detection result on the app screen
runOnUiThread {
    inputImageView.setImageBitmap(visualizedResult)
}
    • You start by parsing ML Kit's DetectedObject results and creating a list of BoxWithText objects to display the visualization result.
    • Then you draw the detection result on top of the input image, using the drawDetectionResult utility method, and show it on the screen (illustrative versions of these helpers are sketched below).
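The codelab ships its own versions of these helpers; an illustrative Kotlin sketch of what BoxWithText and drawDetectionResult can look like (the names match the snippet above, while the drawing style is an arbitrary choice):

import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.Rect

// Pairs a bounding box with the label text to render next to it.
data class BoxWithText(val box: Rect, val text: String)

private fun drawDetectionResult(bitmap: Bitmap, detections: List<BoxWithText>): Bitmap {
    val output = bitmap.copy(Bitmap.Config.ARGB_8888, true)
    val canvas = Canvas(output)
    val boxPaint = Paint().apply {
        style = Paint.Style.STROKE
        strokeWidth = 8f
        color = Color.RED
    }
    val textPaint = Paint().apply {
        color = Color.RED
        textSize = 48f
    }
    detections.forEach { detection ->
        // Draw the bounding box, then the label just above its top-left corner.
        canvas.drawRect(detection.box, boxPaint)
        canvas.drawText(detection.text, detection.box.left.toFloat(), detection.box.top.toFloat() - 8f, textPaint)
    }
    return output
}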


    Final Code With All The Changes:

Now with this, we come to the end of this Object Detection Tutorial. I hope you enjoyed this article and understood the power of TensorFlow, and how easy it is to detect objects in images and video feeds. So, if you have read this, you are no longer a newbie to Object Detection and TensorFlow. Try out these examples and let me know if there are any challenges you face while deploying the code.

Now that you have understood the basics of Object Detection, check out the TensorFlow Certification.

Stay ahead of the curve in technology with Edureka's Post Graduate Program in AI and Machine Learning, in partnership with the E&ICT Academy, National Institute of Technology, Warangal. This Artificial Intelligence Course is curated to deliver the best results.

Got a question for us? Please mention it in the comments section of this Object Detection Tutorial and we will get back to you.

    Analysis Of Previous Smartphone Object Recognition Work

In summary, none of the above studies has provided a thorough analysis of the popular machine learning frameworks or a novel architecture for object recognition systems in constrained environments. For the first time, this paper provides a novel architecture for object recognition in constrained environments, considering all the metrics that affect system performance.


    Access Android Code With Flutter Platform Channel

Using Flutter platform channels, we can write platform-specific code and access it through a channel binding. Look here for more details.

On the Flutter side, I have updated the main.dart file to call the platform method, passing the .lite file asset path and the JSON metadata read from the .meta file containing the YOLO configuration. Post-processing options such as the maximum number of results and the confidence threshold are also appended.

On the Android side, I have added a method binding in the MainActivity file whose arguments are the file paths of the model and meta files.

To access the Flutter asset files from the Android side, we need the platform-side lookup path. We can get it using the getLookupKeyForAsset method of FlutterView.
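A minimal Kotlin sketch of that binding, assuming the v1 Flutter Android embedding (which is what exposes FlutterView.getLookupKeyForAsset); the channel and method names here are illustrative, not the article's actual ones:

import android.os.Bundle
import io.flutter.app.FlutterActivity
import io.flutter.plugin.common.MethodChannel
import io.flutter.plugins.GeneratedPluginRegistrant

class MainActivity : FlutterActivity() {

    private val channelName = "yolo_detector"  // illustrative channel name

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        GeneratedPluginRegistrant.registerWith(this)

        MethodChannel(flutterView, channelName).setMethodCallHandler { call, result ->
            when (call.method) {
                "loadModel" -> {
                    // Resolve the Flutter asset paths to platform-side lookup keys.
                    val modelKey = flutterView.getLookupKeyForAsset(call.argument<String>("model")!!)
                    val metaKey = flutterView.getLookupKeyForAsset(call.argument<String>("meta")!!)
                    // ...initialize the TFLite interpreter from these assets here...
                    result.success(true)
                }
                else -> result.notImplemented()
            }
        }
    }
}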

Using Your Own Data


To export your own data for this tutorial, sign up for Roboflow and make a public workspace, or make a new public workspace in your existing account. If your data is private, you can upgrade to a paid plan for export to use external training routines like this one, or experiment with Roboflow's internal training solution.

If you haven't yet, sign up for a free account with Roboflow. Then upload your dataset.

    Drag and drop to upload data to Roboflow in any format

After upload, you will be prompted to choose options including preprocessing and augmentation.

The augmentation and preprocessing selections for our dataset

    After selecting these options, click Generate and then Download. You will be prompted to choose a data format for your export. Choose YOLO Darknet format.

Note: in addition to converting data to YOLO Darknet format in Roboflow, you can make sure your dataset is healthy with Roboflow's dataset health check, preprocess your images, and generate a larger dataset via augmentations.

    After export, you will receive a curl link to download your data into our training notebook.

    That’s it to get your data into Colab, just several lines of code!


    Converting Tensorflow Format To Tensorflow Lite

    We can’t use the tensorflow .pb with TensorFlow Lite as it use TensorFlowLite uses FlatBuffers format while TensorFlow uses Protocol Buffers.

With the TensorFlow Python installation, we get the tflite_convert command-line script to convert the TensorFlow format to the TFLite format.

pip install --upgrade "tensorflow==1.7.*"
tflite_convert --help

    The primary benefit of FlatBuffers comes from the fact that they can be memory-mapped, and used directly from disk without being loaded and parsed.

We need to define the input array size; I set it to 1 x 416 x 416 x 3 based on the YOLO model input configuration. You can get all the meta information from the .meta JSON file.

    tflite_convert \

On the Flutter side, enable the assets folder in the Flutter configuration.

On the Android side, the .lite file will be memory-mapped and will not work if the file is compressed. The Android Asset Packaging Tool compresses all assets by default, so we need to specify in build.gradle, inside the android block, that .lite files should not be compressed.

    object_detector/android/app/build.gradle
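The relevant build.gradle lines are not reproduced above. The usual approach is an aaptOptions noCompress rule for the .lite extension, sketched here in Gradle Kotlin DSL (the project's file is a Groovy build.gradle, where the equivalent block looks almost the same):

android {
    aaptOptions {
        // Keep .lite model files uncompressed so they can be memory-mapped at runtime.
        noCompress("lite")
    }
}

With that in place, the packaged .lite asset stays uncompressed and the memory-mapped model loads correctly at runtime.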
