Deployment API

As of version 0.1.3, the deployment library is a Python module. The documentation for the module can be found below.

Installing

From the deploy_python folder, run pip3 install . to install the package.
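
After installation, a quick import of the package and the submodules documented below confirms everything is on the path:

    # Quick sanity check that the package and its submodules import cleanly.
    import openem
    import openem.FindRuler
    import openem.Detect
    import openem.Classify
    import openem.Count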

Package openem

OpenEM Inference Deployment Package

Base objects

Define base classes for openem models

class openem.models.ImageModel(model_path, image_dims=None, gpu_fraction=1.0, input_name='input_1:0', output_name='output_node0:0', optimize=True, optimizer_args=None, batch_size=1, cpu_only=False)[source]

Bases: object

Base class for serving image-related models from tensorflow

Initialize an image model object.

model_path : str or path-like object
    Path to the frozen protobuf of the tensorflow graph.
image_dims : tuple
    Tuple of image dims: (<height>, <width>, <channels>). If None, inferred from the graph.
gpu_fraction : float
    Fraction of the GPU allowed to be used by this object.
input_name : str
    Name of the tensor that serves as the image input.
output_name : str or list of str
    Name(s) of the tensor(s) that serve as the output of the network. If a single tensor name is given, process returns that tensor alone; otherwise process returns a list of tensors in the order given here.
batch_size : int
    Maximum number of images to process as a batch.
cpu_only : bool
    If True, only the CPU is used for inference.
inputShape()[source]

Returns the shape of the input image for this network

process(batch_size=None)[source]

Process the current batch of image(s).

Returns None if there are no images.
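
Every model class below follows the same lifecycle: construct it from a frozen graph, queue one or more images, then call process(). A minimal sketch of constructing the base class and inspecting its expected input size, using a hypothetical graph path:

    from openem.models import ImageModel

    # Hypothetical path to a frozen tensorflow graph; substitute your own.
    model = ImageModel('/path/to/frozen_graph.pb',
                       gpu_fraction=0.5,  # leave half the GPU for other work
                       batch_size=4)      # process at most 4 images per call

    # Shape the network expects for input images; the exact layout depends
    # on the graph (image_dims was left as None, so it is inferred).
    print(model.inputShape())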

Find Ruler

Module for finding ruler masks in raw images

class openem.FindRuler.RulerMaskFinder(model_path, image_dims=None, **kwargs)[source]

Bases: openem.models.ImageModel

Class for finding ruler masks from raw images

addImage(image)[source]

Add an image to the underlying ImageModel's current batch after running this model's preprocessing on it.

image : np.ndarray
    The underlying image (not pre-processed) to add to the model's current batch.

process(postprocess=True)[source]

Runs the base ImageModel and applies a high-pass filter that only allows matches greater than 127 into the resulting mask.

Returns the mask of the ruler at the network image size; the user must resize it to the input image size if different.
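
A minimal sketch of the add/process cycle for the ruler mask finder, assuming a hypothetical graph path and OpenCV for image I/O:

    import cv2
    from openem.FindRuler import RulerMaskFinder

    # Hypothetical paths; substitute your own graph and frame.
    finder = RulerMaskFinder('/path/to/find_ruler.pb')

    image = cv2.imread('/path/to/frame.jpg')  # raw frame, not pre-processed
    finder.addImage(image)                    # model-specific preprocessing runs here

    # Mask(s) at the network image size; one mask per queued image is assumed.
    masks = finder.process()
    if masks is not None:
        # Resize back to the original frame size if the network size differs.
        ruler_mask = cv2.resize(masks[0], (image.shape[1], image.shape[0]))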

openem.FindRuler.findRoi(image_mask, h_margin)[source]

Returns the ROI of a given mask, with additional padding added both horizontally and vertically based on h_margin and the underlying aspect ratio.

image_mask : array
    The image mask.
h_margin : int
    Number of pixels of horizontal margin to add.
openem.FindRuler.rectify(image, endpoints)[source]

Rectifies an image such that the ruler (given by endpoints) is flat.

image : array
    An image or image mask.
endpoints : array
    The two endpoints of the ruler.
openem.FindRuler.rulerEndpoints(image_mask)[source]

Find the ruler endpoints given an image mask.

image_mask : array
    8-bit, single-channel image mask.

openem.FindRuler.rulerPresent(image_mask)[source]

Returns true if a ruler is present in the frame
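
These helpers are typically chained on the output of RulerMaskFinder: check that a ruler is present, locate its endpoints, rectify so the ruler lies flat, and extract a padded ROI. A hedged sketch, reusing image and ruler_mask from the example above; the 32-pixel margin and the exact chaining are illustrative:

    from openem.FindRuler import findRoi, rectify, rulerEndpoints, rulerPresent

    # `image` and `ruler_mask` come from the RulerMaskFinder sketch above.
    if rulerPresent(ruler_mask):
        endpoints = rulerEndpoints(ruler_mask)

        # Warp both the frame and the mask so that the ruler lies flat.
        flat_image = rectify(image, endpoints)
        flat_mask = rectify(ruler_mask, endpoints)

        # Padded region of interest around the ruler in the rectified mask;
        # the 32-pixel margin is purely illustrative.
        roi = findRoi(flat_mask, 32)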

Detection

Detection Results

class openem.Detect.Detection(location, confidence, species, frame, video_id)

Create new instance of Detection(location, confidence, species, frame, video_id)

confidence

Alias for field number 1

frame

Alias for field number 3

location

Alias for field number 0

species

Alias for field number 2

video_id

Alias for field number 4
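
Detection is a namedtuple, so results can be read by field name or by position; a small illustrative example with made-up values (the (x, y, width, height) box layout shown here is an assumption):

    from openem.Detect import Detection

    # Made-up values; in practice detections come from a detector's process().
    det = Detection(location=[10, 20, 50, 30],  # box layout is an assumption
                    confidence=0.87,
                    species=1,
                    frame=42,
                    video_id='example.mp4')

    print(det.species, det.confidence)   # access by name
    print(det[0] == det.location)        # or by field number, as listed above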

Single Shot Detector

class openem.Detect.SSD.SSDDetector(model_path, image_dims=None, gpu_fraction=1.0, input_name='input_1:0', output_name='output_node0:0', optimize=True, optimizer_args=None, batch_size=1, cpu_only=False)[source]

Bases: openem.models.ImageModel

Initialize an image model object.

model_path : str or path-like object
    Path to the frozen protobuf of the tensorflow graph.
image_dims : tuple
    Tuple of image dims: (<height>, <width>, <channels>). If None, inferred from the graph.
gpu_fraction : float
    Fraction of the GPU allowed to be used by this object.
input_name : str
    Name of the tensor that serves as the image input.
output_name : str or list of str
    Name(s) of the tensor(s) that serve as the output of the network. If a single tensor name is given, process returns that tensor alone; otherwise process returns a list of tensors in the order given here.
batch_size : int
    Maximum number of images to process as a batch.
cpu_only : bool
    If True, only the CPU is used for inference.
addImage(image, cookie=None)[source]

Add an image to the underlying ImageModel's current batch after running this model's preprocessing on it.

image : np.array
    The underlying image (not pre-processed) to add to the model's current batch.

process()[source]

Runs the network to find fish in the batched images by performing object detection with a Single Shot Detector (SSD).

Returns a list of Detection (or None if the batch is empty).
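
A minimal sketch of batching frames through the SSD detector, assuming a hypothetical graph path, OpenCV for image loading, and the documented return type of a list of Detection:

    import cv2
    from openem.Detect.SSD import SSDDetector

    # Hypothetical graph path; substitute the detect model from your model set.
    detector = SSDDetector('/path/to/detect.pb', batch_size=2)

    for frame_num, path in enumerate(['/path/to/frame0.jpg', '/path/to/frame1.jpg']):
        image = cv2.imread(path)
        detector.addImage(image, cookie=frame_num)  # cookie use here is an assumption

    detections = detector.process()  # list of Detection, or None if the batch is empty
    if detections is not None:
        for det in detections:
            print(det.frame, det.location, det.confidence)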

openem.Detect.SSD.decodeBoxes(loc, anchors, variances, img_size)[source]

Decodes bounding boxes from network output.

loc : array
    Bounding box parameters, one box per element.
anchors : array
    Anchor box parameters, one box per element.
variances : array
    Variances per box.

Returns an Nx4 matrix of bounding boxes.

Retinanet Detector

RetinaNet Object Detector for OpenEM

class openem.Detect.RetinaNet.RetinaNetDetector(modelPath, meanImage=None, gpuFraction=1.0, imageShape=(360, 720), **kwargs)[source]

Bases: openem.models.ImageModel

Initialize the RetinaNet detector model.

modelPath : str
    Path-like object to the frozen pb graph.
meanImage : np.array
    Mean image subtracted from each image prior to network insertion. Can be None.
imageShape : tuple
    (height, width) of the image to feed into the detector network.
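
Construction mirrors the SSD detector; a minimal sketch with a hypothetical graph path, relying on the same addImage/process batching pattern used by the other model classes above (an assumption, since those methods are not listed for this class):

    import cv2
    from openem.Detect.RetinaNet import RetinaNetDetector

    # Hypothetical graph path; imageShape should match the network's training size.
    detector = RetinaNetDetector('/path/to/retinanet.pb',
                                 meanImage=None,         # or an np.array to subtract
                                 imageShape=(360, 720))  # (height, width)

    image = cv2.imread('/path/to/frame.jpg')
    detector.addImage(image)        # addImage/process assumed to match the other detectors
    detections = detector.process()
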
class openem.Detect.RetinaNet.RetinaNetPreprocessor(meanImage=None)[source]

Bases: object

Performs preprocessing for RetinaNet inputs. Meets the callable interface of openem.Detect.Preprocessor.

Classification

Module for performing classification of a detection

class openem.Classify.Classification(species, cover, frame, video_id)

Bases: tuple

Create new instance of Classification(species, cover, frame, video_id)

cover

Alias for field number 1

frame

Alias for field number 2

species

Alias for field number 0

video_id

Alias for field number 3

class openem.Classify.Classifier(model_path, gpu_fraction=1.0, **kwargs)[source]

Bases: openem.models.ImageModel

Initialize an image model object.

model_path : str or path-like object
    Path to the frozen protobuf of the tensorflow graph.
gpu_fraction : float
    Fraction of the GPU allowed to be used by this object.
addImage(image, cookie=None)[source]

Add an image to the underlying ImageModel's current batch after running this model's preprocessing on it.

image : np.ndarray
    The underlying image (not pre-processed) to add to the model's current batch.
process()[source]

Process the current batch of image(s).

Returns None if there are no images.
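
The classifier follows the same batching pattern and is typically fed crops taken around detection boxes; a minimal sketch with hypothetical paths:

    import cv2
    from openem.Classify import Classifier

    # Hypothetical graph path; substitute the classify model from your model set.
    classifier = Classifier('/path/to/classify.pb', gpu_fraction=0.5)

    # In a full pipeline this would be a crop around a Detection's location.
    crop = cv2.imread('/path/to/fish_crop.jpg')
    classifier.addImage(crop)

    results = classifier.process()  # None if no images were queued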

Count

Module for finding keyframes

class openem.Count.KeyframeFinder(model_path, img_width, img_height, gpu_fraction=1.0)[source]

Bases: object

Model to find keyframes of a given species

Initialize a keyframe finder model. Gives a list of keyframes for each species. Caveats of this model:

  • Assumes tracking of one classification/detection per frame

model_path : str or path-like object
    Path to the frozen protobuf of the tensorflow graph.
img_width : int
    Width of the image input to the detector (pixels).
img_height : int
    Height of the image input to the detector (pixels).
gpu_fraction : float
    Fraction of the GPU allowed to be used by this object.
process(classifications, detections)[source]

Process the list of classifications and detections, which must be the same length.

The outer dimension of each parameter is the frame; the inner dimension is the list of classifications or detections in that frame.

classifications : list of list of openem.Classify.Classification
detections : list of list of openem.Detect.Detection

sequenceSize()[source]

Returns the effective number of frames one can process in an individual sequence
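
A hedged sketch of running the keyframe finder over per-frame results gathered from the detector and classifier; the graph path and image size are placeholders, the empty per-frame lists only illustrate the expected shape, and sizing them by sequenceSize() is just one reasonable choice:

    from openem.Count import KeyframeFinder

    # Hypothetical graph path and detector input size.
    finder = KeyframeFinder('/path/to/count.pb', img_width=720, img_height=360)

    # Per-frame results; in a real pipeline these are filled in by the
    # classifier and detector. Both lists must have the same length.
    classifications = [[] for _ in range(finder.sequenceSize())]
    detections = [[] for _ in range(finder.sequenceSize())]

    keyframes = finder.process(classifications, detections)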