Classification

Tator is a web-based media management and curation project. Part of media management is executing algorithms or workflows on a set of media. OpenEM can be run within the confines of a Tator workflow. Currently, RetinaNet-based detection and thumbnail classification are supported for inference within a workflow.

Using the Reference Classification Workflow

The reference workflow can be used by modifying scripts/tator/classification_workflow.yaml to match the type IDs and attributes of the given project.

Generating a data image

At run time, the reference workflow pulls a docker image containing the trained network weights. To generate a weights image, one can use scripts/make_classification_image.py in a manner similar to the following:

python3 make_classification_image.py [-h] [--publish] [--image-tag IMAGE_TAG] models [models ...]

positional arguments:
   models                One or more models to incorporate into image.

optional arguments:
   -h, --help            show this help message and exit
   --publish             If supplied pushes to image repo
   --image-tag IMAGE_TAG
                         Name of image to build/publish
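
For example, to build and publish a hypothetical weights image from two trained model files (the model filenames are placeholders; the image tag mirrors the data_image referenced later in this section), one might run:

python3 make_classification_image.py \
    --image-tag cvisionai/odfw_class_weights \
    --publish \
    model_a.pth model_b.pth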

Using the reference workflow definition

A reference workflow YAML file is in the repository and can be modified to indicate project-specific requirements. Arguments in the tator section refer to Tator-level semantics, such as the track_type_id to acquire thumbnails from and the attribute name (label_attribute) in which to output predictions.

Options in the ensemble_config section map to the arguments and defaults used to initialize openem2.Classifier.thumbnail_classifier.EnsembleClassifier.

Options in the track_params section map to the arguments and defaults of the process_track_results function of the instantiated EnsembleClassifier.
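
Putting the sections together, a complete strategy file might look like the sketch below; the values mirror those embedded as a raw artifact at /work/strategy.yaml in the reference workflow that follows, and the track type ID and class names are specific to that example project:

tator:
  track_type_id: 30
  label_attribute: Predicted
ensemble_config:
  classNames:
    - Commercial
    - Recreational
  batchSize: 16
track_params:
  high_entropy_name: Unknown
  entropy_cutoff: 0.40
data_image: cvisionai/odfw_class_weights

The full reference workflow definition follows: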

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: classifier-example
spec:
  entrypoint: pipeline
  ttlStrategy:
    secondsAfterSuccess: 600
    secondsAfterFailure: 86400
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
  templates:
  - name: pipeline
    steps:
    - - name: worker
        template: worker
  - name: worker
    inputs:
      artifacts:
      - name: strategy
        path: /work/strategy.yaml
        raw:
          data: |
            tator:
              track_type_id: 30
              label_attribute: Predicted
            ensemble_config:
              classNames:
                - Commercial
                - Recreational
              batchSize: 16
            track_params:
              high_entropy_name: Unknown
              entropy_cutoff: 0.40
            data_image: cvisionai/odfw_class_weights
    container:
      image: cvisionai/openem_lite2:experimental
      imagePullPolicy: Always
      resources:
        requests:
          cpu: 4000m
        limits:
          nvidia.com/gpu: 1
      env:
      - name: TATOR_MEDIA_IDS
        value: "{{workflow.parameters.media_ids}}"
      - name: TATOR_API_SERVICE
        value: "{{workflow.parameters.rest_url}}"
      - name: TATOR_AUTH_TOKEN
        value: "{{workflow.parameters.rest_token}}"
      - name: TATOR_PROJECT_ID
        value: "{{workflow.parameters.project_id}}"
      volumeMounts:
      - name: dockersock
        mountPath: /var/run/docker.sock
      command: [python3]
      args: ["-m", "openem2.classification.tator", "--strategy", "/data/strategy.yaml"]

Project setup

A project using this workflow has a video type represented by a <media_type_id>. The project also has a localization box type represented by a <box_type_id>, and a <track_type_id> that associates multiple localizations with the same physical object.

The <media_type_id> has the following required attributes:

Track Classification Processed
A string attribute type that is set to the date and time when the classification workflow finishes processing the media file.

The <track_type_id> requires the following attributes:

<label_attribute>
A string attribute holding the predicted class name for the track. If ‘Label’ is not an appropriate attribute name for the project, it can be customized via the label_attribute key in the strategy definition (see the snippet after this list).
Entropy
This float attribute represents the uncertainty of the classification algorithm in its determination.
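
For example, if the project's track type stores predictions in an attribute named Species rather than Label (the attribute name here is purely illustrative), point the strategy at it via the tator section:

tator:
  track_type_id: <track_type_id>
  label_attribute: Species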