In the example above you can see that we successfully ran a simple Python program. The second example, a function, produces an error message because it must be called as perkalian(a,b). In other words, every Python feature works just as it would on a local machine.
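To make the point concrete, here is a minimal sketch of the function-call mistake described above (the function body is assumed, since only the name perkalian appears in the article):

```python
# perkalian(a, b) returns the product of its two arguments.
def perkalian(a, b):
    return a * b

# Calling it without arguments, e.g. perkalian(), raises a TypeError;
# the correct call supplies both operands:
print(perkalian(3, 4))  # 12
```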
Beyond that, a few setup items deserve attention before training a TensorFlow model.
Don't skip this step: Colab lets you choose the hardware accelerator type. By default the choice is None (CPU), and there are two other options: GPU (Graphics Processing Unit) and TPU (Tensor Processing Unit).
In my experiments with each accelerator type on the same training configuration, the GPU computed fastest, so that is the one to pick. If you want to see for yourself, though, feel free to run the experiment.
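To confirm which accelerator Colab actually assigned, you can run a quick check in a cell. This is a sketch assuming the TensorFlow 1.x API used throughout this article:

```python
# Report whether a GPU device is visible to TensorFlow.
# tf.test.gpu_device_name() returns '' when running on CPU only.
try:
    import tensorflow as tf
    device_name = tf.test.gpu_device_name()
    if device_name:
        print("GPU found:", device_name)
    else:
        print("No GPU found, running on CPU")
except ImportError:
    device_name = ""
    print("TensorFlow is not installed in this environment")
```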
That's all for this article; I hope it was useful to you.
Oops, just kidding 🙂. We haven't even gotten to the real topic yet.
In this section we will use Google Colab to train a model that detects soft-drink brands in images; you can also use it to run detection on video from a camera or webcam.
Before using Google Colab's facilities, we need to make a few preparations so that everything works the way we want, among them:
# tensorflow object detection colabs
"""
Created on Fri Oct 4 14:26:49 2019
@author: Muhammad Zacky Asy'ari
"""
from PIL import Image
import os, sys
# The backslashes in this Windows path were lost in the original listing;
# reconstructed as a plain string (a raw string cannot end with "\").
path = "C:\\tensorflow1\\Speciment\\Botol Kaleng dan Plastik\\Train\\"
dirs = os.listdir(path)

def resize():
    i = 0
    for item in dirs:
        if os.path.isfile(path + item):
            im = Image.open(path + item)
            f, e = os.path.splitext(path + item)
            # Resize to 720x540 and save as JPEG in the working directory.
            imResize = im.resize((720, 540), Image.ANTIALIAS)
            imResize.save('Image_' + str(i) + '.jpg', 'JPEG', quality=90)
            i = i + 1
            print("done image " + str(i))

resize()
Once resizing is done, place the images into two separate folders: ./data/images/train and ./data/images/test.
Split the images 80/20; that is, if you have 100 images, put 80 in the train folder and 20 in the test folder.
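The 80/20 split can be scripted rather than done by hand. A sketch, not from the original article, with the folder names following the layout above:

```python
# Split images from src_dir into train_dir and test_dir at train_ratio.
import os
import random
import shutil

def split_dataset(src_dir, train_dir, test_dir, train_ratio=0.8, seed=42):
    os.makedirs(train_dir, exist_ok=True)
    os.makedirs(test_dir, exist_ok=True)
    images = sorted(f for f in os.listdir(src_dir)
                    if f.lower().endswith((".jpg", ".jpeg", ".png")))
    # Shuffle deterministically so the split is reproducible.
    random.Random(seed).shuffle(images)
    cut = int(len(images) * train_ratio)
    for name in images[:cut]:
        shutil.copy(os.path.join(src_dir, name), os.path.join(train_dir, name))
    for name in images[cut:]:
        shutil.copy(os.path.join(src_dir, name), os.path.join(test_dir, name))
    return len(images[:cut]), len(images[cut:])
```

For example, split_dataset("resized", "data/images/train", "data/images/test") would copy 80% of the resized images into the train folder and the rest into test.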
The next step is the most tedious and time-consuming one, but you have to do it: creating an annotation for every image we just resized.
A detailed explanation is in the image-labelling section of my previous article.
# tensorflow object detection colabs
# If you forked the repo, change the link below to your own GitHub link.
repo_url = 'https://github.com/zacky131/object_detection_demo'
# Number of training steps.
num_steps = 20000 # 200000
# Number of evaluation steps.
num_eval_steps = 50
# Number of samples in the "test" folder.
num_examples = 45
MODELS_CONFIG = {
    'ssd_mobilenet_v2': {
        'model_name': 'ssd_mobilenet_v2_coco_2018_03_29',
        'pipeline_file': 'ssd_mobilenet_v2_coco.config',
        'batch_size': 12
    },
    'faster_rcnn_inception_v2': {
        'model_name': 'faster_rcnn_inception_v2_coco_2018_01_28',
        'pipeline_file': 'faster_rcnn_inception_v2_pets.config',
        'batch_size': 1
    },
    'rfcn_resnet101': {
        'model_name': 'rfcn_resnet101_coco_2018_01_28',
        'pipeline_file': 'rfcn_resnet101_pets.config',
        'batch_size': 8
    },
    'ssd_mobilenet_small_v3': {
        'model_name': 'ssd_mobilenet_v3_small_coco_2019_08_14',
        'pipeline_file': 'ssdlite_mobilenet_v3_small_320x320_coco.config',
        'batch_size': 12
    },
    'ssd_inception_v2_coco': {
        'model_name': 'ssd_inception_v2_coco_2018_01_28',
        'pipeline_file': 'ssd_inception_v2_coco.config',
        'batch_size': 12
    },
    'ssd_mobilenet_large_v3': {
        'model_name': 'ssd_mobilenet_v3_large_coco_2019_08_14',
        'pipeline_file': 'ssdlite_mobilenet_v3_large_320x320_coco.config',
        'batch_size': 512
    }
}
# Pick the model you want to use.
# Select one of the keys in `MODELS_CONFIG`.
selected_model = 'faster_rcnn_inception_v2'
# Name of the object detection model to use.
MODEL = MODELS_CONFIG[selected_model]['model_name']
# Name of the pipeline file in the TensorFlow Object Detection API.
pipeline_file = MODELS_CONFIG[selected_model]['pipeline_file']
# Training batch size that fits the selected model into the Colab Tesla K80 GPU memory.
batch_size = MODELS_CONFIG[selected_model]['batch_size']
# tensorflow object detection colabs
import os
%cd /content
repo_dir_path = os.path.abspath(os.path.join('.', os.path.basename(repo_url)))
!git clone {repo_url}
%cd {repo_dir_path}
!git pull
# tensorflow object detection colabs
%cd /content
!git clone --quiet https://github.com/tensorflow/models.git
!apt-get install -qq protobuf-compiler python-pil python-lxml python-tk
!pip install -q Cython contextlib2 pillow lxml matplotlib
!pip install -q pycocotools
%cd /content/models/research
!protoc object_detection/protos/*.proto --python_out=.
import os
os.environ['PYTHONPATH'] += ':/content/models/research/:/content/models/research/slim/'
!python object_detection/builders/model_builder_test.py
# tensorflow object detection colabs
%cd {repo_dir_path}
# Convert the XML annotations in the train folder into a single CSV file,
# and create the `label_map.pbtxt` file in the `data/annotations/` folder.
!python xml_to_csv.py -i data/images/train -o data/annotations/train_labels.csv -l data/annotations
# Convert the XML annotations in the test folder into a single CSV file.
!python xml_to_csv.py -i data/images/test -o data/annotations/test_labels.csv
# Create the `train.record` file
!python generate_tfrecord.py --csv_input=data/annotations/train_labels.csv --output_path=data/annotations/train.record --img_path=data/images/train --label_map data/annotations/label_map.pbtxt
# Create the `test.record` file
!python generate_tfrecord.py --csv_input=data/annotations/test_labels.csv --output_path=data/annotations/test.record --img_path=data/images/test --label_map data/annotations/label_map.pbtxt
# tensorflow object detection colabs
test_record_fname = '/content/object_detection_demo/data/annotations/test.record'
train_record_fname = '/content/object_detection_demo/data/annotations/train.record'
label_map_pbtxt_fname = '/content/object_detection_demo/data/annotations/label_map.pbtxt'
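For reference, the generated `label_map.pbtxt` maps each class name to an integer id starting at 1. A sketch with hypothetical class names; your actual labels come from the annotations:

```
item {
    id: 1
    name: 'cola_brand_a'
}
item {
    id: 2
    name: 'cola_brand_b'
}
```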
# tensorflow object detection colabs
%cd /content/models/research
import os
import shutil
import glob
import urllib.request
import tarfile
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
DEST_DIR = '/content/models/research/pretrained_model'
if not (os.path.exists(MODEL_FILE)):
    urllib.request.urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
    shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
# Checkpoint prefix of the downloaded pretrained model; this variable was
# missing from the original listing but is used when patching the pipeline
# config below.
fine_tune_checkpoint = os.path.join(DEST_DIR, "model.ckpt")
# tensorflow object detection colabs auftechnique.com
import os
pipeline_fname = os.path.join('/content/models/research/object_detection/samples/configs/', pipeline_file)
assert os.path.isfile(pipeline_fname), '`{}` not exist'.format(pipeline_fname)
# tensorflow object detection colabs auftechnique.com
def get_num_classes(pbtxt_fname):
    from object_detection.utils import label_map_util
    label_map = label_map_util.load_labelmap(pbtxt_fname)
    categories = label_map_util.convert_label_map_to_categories(
        label_map, max_num_classes=90, use_display_name=True)
    category_index = label_map_util.create_category_index(categories)
    return len(category_index.keys())
# tensorflow object detection colabs auftechnique.com
import re
num_classes = get_num_classes(label_map_pbtxt_fname)
with open(pipeline_fname) as f:
    s = f.read()
with open(pipeline_fname, 'w') as f:
    # fine_tune_checkpoint
    s = re.sub('fine_tune_checkpoint: ".*?"',
               'fine_tune_checkpoint: "{}"'.format(fine_tune_checkpoint), s)
    # tfrecord files for train and test.
    s = re.sub(
        '(input_path: ".*?)(train.record)(.*?")', 'input_path: "{}"'.format(train_record_fname), s)
    s = re.sub(
        '(input_path: ".*?)(val.record)(.*?")', 'input_path: "{}"'.format(test_record_fname), s)
    # label_map_path
    s = re.sub(
        'label_map_path: ".*?"', 'label_map_path: "{}"'.format(label_map_pbtxt_fname), s)
    # Set the training batch_size.
    s = re.sub('batch_size: [0-9]+',
               'batch_size: {}'.format(batch_size), s)
    # Set the number of training steps, num_steps.
    s = re.sub('num_steps: [0-9]+',
               'num_steps: {}'.format(num_steps), s)
    # Set the number of classes, num_classes.
    s = re.sub('num_classes: [0-9]+',
               'num_classes: {}'.format(num_classes), s)
    # Set the number of examples (number of images).
    s = re.sub('num_examples: [0-9]+',
               'num_examples: {}'.format(num_examples), s)
    f.write(s)
# tensorflow object detection colabs auftechnique.com
!cat {pipeline_fname}
# tensorflow object detection colabs auftechnique.com
model_dir = 'training/'
# Remove previous training output so we start fresh again (optional)
!rm -rf {model_dir}
os.makedirs(model_dir, exist_ok=True)
# tensorflow object detection colabs auftechnique.com
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip -o ngrok-stable-linux-amd64.zip
# tensorflow object detection colabs auftechnique.com
LOG_DIR = model_dir
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
# tensorflow object detection colabs auftechnique.com
get_ipython().system_raw('./ngrok http 6006 &')
At first the TensorBoard display will be empty, since training has not started yet. Once the training process begins, the graphs will start to appear.
# tensorflow object detection colabs auftechnique.com
!curl -s http://localhost:4040/api/tunnels | python3 -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
# tensorflow object detection colabs auftechnique.com
!python /content/models/research/object_detection/model_main.py \
    --pipeline_config_path={pipeline_fname} \
    --model_dir={model_dir} \
    --alsologtostderr \
    --num_train_steps={num_steps} \
    --num_eval_steps={num_eval_steps}
# tensorflow object detection colabs auftechnique.com
!ls {model_dir}
# The classic way to train (can also be used).
# !python /content/models/research/object_detection/legacy/train.py --logtostderr --train_dir={model_dir} --pipeline_config_path={pipeline_fname}
# tensorflow object detection colabs auftechnique.com
import re
import numpy as np
output_directory = './fine_tuned_model'
# Find the checkpoint with the highest step number in the training folder.
lst = os.listdir(model_dir)
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
steps = np.array([int(re.findall(r'\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
last_model_path = os.path.join(model_dir, last_model)
print(last_model_path)
!python /content/models/research/object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path={pipeline_fname} \
    --output_directory={output_directory} \
    --trained_checkpoint_prefix={last_model_path}
# training tensorflow model auftechnique.com
!ls {output_directory}
# training tensorflow model auftechnique.com
import os
pb_fname = os.path.join(os.path.abspath(output_directory), "frozen_inference_graph.pb")
assert os.path.isfile(pb_fname), '`{}` not exist'.format(pb_fname)
!ls -alh {pb_fname}
# training tensorflow model auftechnique.com
# Install the PyDrive wrapper and import some libraries.
# This only needs to be done once per notebook.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
fname = os.path.basename(pb_fname)
# Create and upload the frozen graph file.
uploaded = drive.CreateFile({'title': fname})
uploaded.SetContentFile(pb_fname)
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
# training tensorflow model auftechnique.com
from google.colab import files
files.download(pb_fname)
# training tensorflow model auftechnique.com
from google.colab import files
files.download(label_map_pbtxt_fname)
# training tensorflow model auftechnique.com
files.download(pipeline_fname)
# training tensorflow model auftechnique.com
import os
import glob
# Path to the frozen detection graph. This is the actual model used for object detection.
PATH_TO_CKPT = pb_fname
# List of strings used to add the correct label to each detection box.
PATH_TO_LABELS = label_map_pbtxt_fname
# To test the code with your own images, just add the image files to PATH_TO_TEST_IMAGES_DIR.
PATH_TO_TEST_IMAGES_DIR = os.path.join(repo_dir_path, "test")
assert os.path.isfile(pb_fname)
assert os.path.isfile(PATH_TO_LABELS)
TEST_IMAGE_PATHS = glob.glob(os.path.join(PATH_TO_TEST_IMAGES_DIR, "*.*"))
assert len(TEST_IMAGE_PATHS) > 0, 'No images found in `{}`.'.format(PATH_TO_TEST_IMAGES_DIR)
print(TEST_IMAGE_PATHS)
The script below displays the detection results on the Google Colab virtual machine.
# training tensorflow model auftechnique.com
%cd /content/models/research/object_detection
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# The line below is needed because the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
# The magic below is needed to display images inline.
%matplotlib inline
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(
    label_map, max_num_classes=num_classes, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {
                output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in [
                    'num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks'
            ]:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                        tensor_name)
            if 'detection_masks' in tensor_dict:
                # The following processing is for a single image only.
                detection_boxes = tf.squeeze(
                    tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(
                    tensor_dict['detection_masks'], [0])
                # Reframing is needed to translate the masks from box
                # coordinates into image coordinates matching the image size.
                real_num_detection = tf.cast(
                    tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0], [
                    real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0], [
                    real_num_detection, -1, -1])
                detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                    detection_masks, detection_boxes, image.shape[0], image.shape[1])
                detection_masks_reframed = tf.cast(
                    tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                # Follow the convention by adding back the batch dimension.
                tensor_dict['detection_masks'] = tf.expand_dims(
                    detection_masks_reframed, 0)
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
            # Run the inference
            output_dict = sess.run(tensor_dict,
                                   feed_dict={image_tensor: np.expand_dims(image, 0)})
            # All outputs are float32 numpy arrays, so convert types as appropriate.
            output_dict['num_detections'] = int(
                output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict[
                'detection_classes'][0].astype(np.uint8)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            if 'detection_masks' in output_dict:
                output_dict['detection_masks'] = output_dict['detection_masks'][0]
    return output_dict
for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    # The array-based representation of the image will be used later
    # to prepare the result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    # Expand dimensions, since the model expects images with shape [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(image_np, detection_graph)
    # Visualization of the detection results.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=8)
    plt.figure(figsize=IMAGE_SIZE)
    plt.imshow(image_np)
VOILA!!!
We have successfully detected objects using the Google Colab virtual machine.
If the images above do not appear, wait a few minutes and re-run the last command.