YOLO V3 Real-Time Object Detection for Beginners A-Z

madan
6 min read · Mar 3, 2020


Steps for installing labelImg on Linux: first, create a new environment (a good habit for every coder). I named mine 'YOLO'.

  • Download labelImg: download the labelImg ZIP file from GitHub.
  • Install pyqt5-dev-tools with the command below:
madanmaram $ sudo apt-get install pyqt5-dev-tools
  • Install lxml:
madanmaram $ sudo apt-get install python3-lxml

Next, change into the Downloads directory:

madanmaram $ cd Downloads
madanmaram~/Downloads $ ls -ltr  # shows the list of files in Downloads

Next, go to the labelImg-master folder in Downloads:

madanmaram~/Downloads $ cd labelImg-master
madanmaram~/Downloads/labelImg-master $ make qt5py3
  • Run labelImg.py:
madanmaram~/Downloads/labelImg-master $ python3 labelImg.py
  • Next, label the data by drawing bounding boxes around your objects.

What is YOLO?

“You Only Look Once” (YOLO) is an algorithm that uses convolutional neural networks for object detection. It is one of the faster object detection algorithms out there. Though it is not the most accurate object detection algorithm, it is a very good choice when we need real-time detection without losing too much accuracy.

In comparison to recognition algorithms, a detection algorithm not only predicts class labels but also locates the objects. So it not only classifies the image into a category, it can also detect multiple objects within an image. The algorithm applies a single neural network to the full image: the network divides the image into regions and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities.
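To make "divides the image into regions" concrete, here is a small counting sketch. It assumes YOLOv3's standard 416×416 input, which is predicted on three grids (13×13, 26×26, 52×52) with 3 anchor boxes per cell and COCO's 80 classes:

```python
# Sketch: how many bounding boxes YOLOv3 predicts for a 416x416 input.
# Each grid cell predicts 3 boxes; each box carries 4 coordinates,
# 1 objectness score, and one probability per class.

def total_boxes(grids=(13, 26, 52), anchors_per_cell=3):
    """Total predicted boxes across all three detection scales."""
    return sum(g * g * anchors_per_cell for g in grids)

def values_per_box(num_classes=80):
    """4 box coordinates + 1 objectness score + class probabilities."""
    return 4 + 1 + num_classes

print(total_boxes())     # 10647 boxes before non-maximum suppression
print(values_per_box())  # 85 values per box for COCO's 80 classes
```

Those 10,647 raw boxes are then filtered down by the predicted probabilities and non-maximum suppression.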

Prerequisites:

To fully understand this tutorial:

  • You should understand how convolutional neural networks work, including residual blocks, skip connections, and upsampling.
  • You should know what object detection, bounding-box regression, IoU, and non-maximum suppression are.
  • You should be able to create simple neural networks with ease.
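Since IoU comes up repeatedly below, here is a minimal sketch of computing it for two axis-aligned boxes. The (x1, y1, x2, y2) corner format is an assumption for this illustration; darknet itself stores centre/width/height:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```

Non-maximum suppression uses exactly this value: among overlapping boxes for the same object, it keeps the highest-scoring one and discards boxes whose IoU with it exceeds a threshold.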

Data Preparation: XML to TXT

We want to convert our XML files (ImageNet format) to TXT files (darknet format). You can use this link to convert; it adds the functionality that changes XML to TXT.

Installation

sudo pip install -r requirements.txt

Usage

python xmltotxt.py -xml xml -out out
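If you prefer not to depend on the repo, the conversion itself is short. Here is a minimal sketch assuming Pascal VOC-style XML (the tag names `size`, `object`, and `bndbox` follow that convention; check them against your own annotation files):

```python
import xml.etree.ElementTree as ET

def voc_to_yolo(xml_text, class_names):
    """Convert one Pascal VOC XML annotation into darknet TXT lines:
    '<class_id> <x_center> <y_center> <width> <height>', all normalized."""
    root = ET.fromstring(xml_text)
    w = int(root.find("size/width").text)
    h = int(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = class_names.index(obj.find("name").text)
        box = obj.find("bndbox")
        x1, y1 = float(box.find("xmin").text), float(box.find("ymin").text)
        x2, y2 = float(box.find("xmax").text), float(box.find("ymax").text)
        # Darknet wants centre coordinates and sizes relative to the image.
        lines.append("%d %.6f %.6f %.6f %.6f" % (
            cls, (x1 + x2) / 2 / w, (y1 + y2) / 2 / h,
            (x2 - x1) / w, (y2 - y1) / h))
    return lines
```

Running this over each XML file and writing the resulting lines to a `.txt` file next to the corresponding image gives darknet labels in the format it expects.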

Darknet Downloading:

Darknet is an open-source neural network framework written in C and CUDA. It is fast, easy to install, and supports CPU and GPU computation. You can find the source on GitHub and clone or download it from here.

Open the Makefile in the darknet folder using:

gedit Makefile

Next, in the Makefile, set GPU=1, CUDNN=1, and OPENCV=1. Then download the pre-trained model: Link
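The three flag edits can also be scripted instead of done by hand in gedit. This sketch assumes the stock Makefile ships with `GPU=0`-style defaults and that the file path points at your cloned darknet folder:

```python
def enable_darknet_flags(makefile_text, flags=("GPU", "CUDNN", "OPENCV")):
    """Flip GPU/CUDNN/OPENCV from 0 to 1 in the darknet Makefile text."""
    for flag in flags:
        makefile_text = makefile_text.replace(f"{flag}=0", f"{flag}=1")
    return makefile_text

# Usage (the path is an assumption; adjust to where you cloned darknet):
# from pathlib import Path
# p = Path("darknet/Makefile")
# p.write_text(enable_darknet_flags(p.read_text()))
```

Remember to re-run `make` after changing any of these flags, since they control which code paths are compiled in.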

Cloning GitHub repo steps into Drive from Google Colab

First, clone or download the files from here. Then upload those files to Google Drive and:

  • Connect Google Colab with Google Drive

What is Google Colaboratory?

Google Colaboratory is known as Colab. It is a cloud service, and Google Colab now supports GPU and TPU!

Using Colab, you can:

  • Enhance your Python programming skills
  • Develop deep learning models using the most popular libraries, like TensorFlow, Keras, PyTorch, and OpenCV.
  • Do anything without worrying much about packages, libraries, and their installation.
  • Most libraries come pre-installed on Colab, which makes it easy to use; libraries that are not pre-installed can be added with a simple command.

Loading Your Data into Google Colaboratory.

One thing that makes Colab the best of all is that it comes with various libraries that help in accessing lots of Services provided by Google itself. Colab saves all your Jupyter Notebook to Google Drive, and you can share your Jupyter Notebooks very efficiently anywhere.

But a problem arises when we have to work with a huge dataset. Google Colab provides many ways to upload your data to the virtual machine on which your code is running, but as soon as you get disconnected, all of your data is lost, and you are reconnected to a new virtual machine.

I'm here to help you avoid uploading your data to Colab again and again.

To avoid this problem, follow these steps:

1. First of all, Upload your Data to your Google Drive.

2. Run the following script in colab shell.

Start by connecting Google Drive to Colab:

from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My Drive/Colab Notebooks/Yolo_pytorch_custom_data-master

Copy the authorization code of your account.

Check your GPU's compatibility with CUDA; as you can see, the Colab GPU has CUDA 10.0 installed.

Load the dataset and darknet folder into the Colab working space by unzipping "darknet.zip":

  • Use the syntax !unzip <path_to_darknet.zip_in_your_GDrive> to extract the darknet folder. In my case, "darknet.zip" is located in My Drive/Sharing_storage1.

In [ ]:

!unzip "/content/drive/My Drive/Sharing_storage1/darknet.zip"

Compile the darknet directory with the script below:

In [ ]:

%cd /content/darknet 
!make
!chmod +x ./darknet

Save weights during training to your Google Drive

This step is important since the Colab environment is recycled after 12 hours, and all files located in its working space are deleted. Here we define a symbolic link to save the weights directly into the backup folder we created in Google Drive earlier. In my case, the backup folder is My Drive/YOLOv3_weight/backup.

In [ ]:

!rm /content/darknet/backup -r
!ln -s /content/drive/'My Drive'/YOLOv3_weight/backup /content/darknet

Install dos2unix to convert train.txt, val.txt, yolo.data, yolo.names, and yolov3_custom.cfg to Unix line endings:

In [ ]:

!sudo apt install dos2unix

In [ ]:

!dos2unix ./data/train.txt
!dos2unix ./data/val.txt
!dos2unix ./data/yolo.data
!dos2unix ./data/yolo.names
!dos2unix ./cfg/yolov3_custom.cfg

Finally, let’s train our model

In [ ]:

%cd /content/darknet
!./darknet detector train data/yolo.data cfg/yolov3_custom.cfg darknet53.conv.74

Retrain the YOLOv3 model with saved weights

In case your Colab server is recycled and you do not want to train your model from the beginning, execute all the above steps again, but this time replace darknet53.conv.74 with the path to your saved weights. For example, if I already had a weights file trained for 700 iterations, the training can be resumed with the script below:

!./darknet detector train data/yolo.data cfg/yolov3_custom.cfg backup/yolov3_700.weights

If all went well, training should begin now.

The weights are stored in the backup directory every 1000 iterations. Expect to see some nan values during training, but make sure the avg loss is never nan.
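A quick way to watch for that is to scan the training log for the avg loss field. Darknet's per-iteration lines look roughly like `700: 1.2345, 1.4567 avg, 0.001000 rate, ...`; the exact format can differ between forks, so treat this parser as a sketch:

```python
import math
import re

# Matches the "<value> avg" field in a darknet training line.
AVG_RE = re.compile(r"([-\d.naif]+) avg")

def avg_losses(log_lines):
    """Yield the avg-loss value from each training line that has one."""
    for line in log_lines:
        m = AVG_RE.search(line)
        if m:
            yield float(m.group(1))

log = [
    "700: 1.2345, 1.4567 avg, 0.001000 rate, 3.2 seconds, 44800 images",
    "701: 1.1000, 1.4201 avg, 0.001000 rate, 3.1 seconds, 44864 images",
]
losses = list(avg_losses(log))
print(losses)                              # [1.4567, 1.4201]
print(any(math.isnan(v) for v in losses))  # False means training is healthy
```

If the avg loss ever comes back as nan, stop the run and check the dataset paths, label files, and learning rate before wasting more GPU hours.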

Evaluating performance

To evaluate the model’s performance over the test set, run either of the following commands:

./darknet detector map data/yolo.data cfg/yolov3_custom.cfg backup/yolo-obj_final.weights

./darknet detector recall data/yolo.data cfg/yolov3_custom.cfg backup/yolov3_final.weights


What is the learning rate?

The amount by which the weights are updated during training is referred to as the step size or the "learning rate." Specifically, the learning rate is a configurable hyperparameter used in the training of neural networks that has a small positive value, often in the range between 0.0 and 1.0.
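A one-variable gradient-descent step makes the role of the learning rate concrete. This is a generic sketch, not darknet's actual update rule:

```python
def gradient_step(w, grad, lr):
    """One gradient-descent update: move w against the gradient, scaled by lr."""
    return w - lr * grad

# Minimizing f(w) = (w - 3)^2, whose gradient is 2 * (w - 3):
w = 0.0
for _ in range(25):
    w = gradient_step(w, 2 * (w - 3), lr=0.1)
print(round(w, 3))  # close to 3.0, the minimum of f
```

A larger learning rate moves toward the minimum faster but can overshoot and diverge; a smaller one is safer but slower, which is why it is tuned as a hyperparameter.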

What is loss value?

The loss value indicates how well or poorly a model behaves after each iteration of optimization. Ideally, one would expect the loss to decrease after each iteration, or every few iterations. The accuracy of a model is usually determined after the model parameters are learned and fixed, when no learning is taking place.

PROBLEM

I faced this problem when training my model: Google Colab keeps disconnecting automatically after 3 hours if I do not respond, and my data is lost.

SOLUTION

So to prevent this, just run the following code in the browser console; it will keep the session from disconnecting.

Press Ctrl+Shift+I to open the inspector view, then go to the Console tab.

function ClickConnect(){
  console.log("Working");
  document.querySelector("colab-toolbar-button#connect").click();
}
setInterval(ClickConnect, 60000);

It keeps clicking the Connect button every 60 seconds and prevents the session from disconnecting. If your button's id differs, use the generic version:

function ClickConnect(){
  console.log("Working");
  document.querySelector("colab-toolbar-button").click();
}
setInterval(ClickConnect, 60000);

  1. Right-click on the Connect button (on the top-right side of Colab)
  2. Click on Inspect
  3. Get the HTML id of the button and substitute it into the code above

Here is my GitHub link; if you have any doubts, go through it.

Resources

GitHub repo of darknet

YOLOv3 paper

YOLOv3 homepage
