
TriDet

Notes on the details of TriDet.


Intro

  1. TriDet: Temporal Action Detection with Relative Boundary Modeling.
  2. Temporal Action Detection (TAD)

    1. Detect all action boundaries and categories from an untrimmed video.
    2. The pipeline of TAD:
      1. backbone:
        1. use a model pre-trained on the action recognition task.
        2. to obtain a feature map for each frame.
  3. The core focus of the authors:

    1. obtain more accurate boundaries.
    2. explore the Transformer for TAD.

Related Work

Intro

  1. TAD works that use the Transformer can be divided into two classes by how they predict boundaries:
    1. Segment-level prediction
    2. Instant-level prediction

Segment-level prediction

  1. Based on the extracted feature map, take a clip, build a simple global representation of it (e.g., by pooling), and finally judge whether the clip is the target action.
  2. E.g., both of the following are two-stage networks (like Faster R-CNN: the first stage generates many proposals, the second stage regresses and classifies the proposals):
    1. BMN
    2. PGCN:
      1. uses a GCN to refine every proposal.
  3. These methods can’t be trained end-to-end.
  4. End-to-end:
    1. TadTR
    2. ReAct

Instant-level prediction

Anchor-free Detection

AFSD

  1. Predicts, at each position, the distance to the start or end boundary (the decoding step is sketched after this list).
  2. Then the position pointed to by the most other positions is taken as the boundary.
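
A minimal sketch of this anchor-free decoding step, assuming hypothetical per-position distance predictions d_start and d_end (AFSD additionally refines candidates with salient boundary features):

```python
import torch

def decode_anchor_free(d_start, d_end, stride=1.0):
    """Turn per-position boundary distances into candidate segments.

    d_start, d_end: (T,) predicted distances from each position t to the
    action start / end, in feature-grid units (names are assumptions).
    """
    t = torch.arange(d_start.shape[0], dtype=torch.float32)
    starts = (t - d_start) * stride                # candidate start time per position
    ends = (t + d_end) * stride                    # candidate end time per position
    return torch.stack([starts, ends], dim=-1)     # (T, 2) candidate segments
```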

ActionFormer

  1. Uses sliding-window (local) self-attention, sketched after this list.
  2. In 2022, this work improved the state of the art in TAD markedly.
  3. So TriDet builds on this work.
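
A minimal single-head sketch of window-restricted self-attention (just the masking idea; ActionFormer's actual layer uses multi-head attention, learned projections, and a feature pyramid, so the names and shapes below are assumptions):

```python
import torch

def local_self_attention(x, window=9):
    """Self-attention restricted to a local temporal window via a banded mask.

    x: (T, d) per-frame features; `window` is the sliding-window size.
    """
    T, d = x.shape
    q = k = v = x                                 # identity projections for brevity
    scores = q @ k.t() / d ** 0.5                 # (T, T) attention logits
    idx = torch.arange(T)
    # Mask out pairs of positions farther apart than half the window.
    mask = (idx[:, None] - idx[None, :]).abs() > window // 2
    scores = scores.masked_fill(mask, float("-inf"))
    return scores.softmax(dim=-1) @ v             # (T, d) locally attended features
```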

Segmentation

MLAD

  1. Applies self-attention along the temporal dimension and along the class dimension.
  2. The attention outputs of the two dimensions are added to build the final feature map.

MS-TCT

  1. Add a CNN module after a traditional self-attention module.
  2. Add residual connections.

Summary

  1. Both of the above works focus on modifying self-attention.
  2. This indicates that vanilla self-attention cannot be applied to TAD directly.

Pros and Cons

Segment-level prediction

  1. Contains a global representation of each segment.
  2. Larger receptive field.
  3. More information.
  4. Detailed information at each instant is discarded.
  5. Highly dependent on the accuracy of the segments.

Instant-level prediction

  1. Contains a detailed representation of each instant.
  2. Smaller receptive field.
  3. High requirement on feature discriminability (a strong backbone is needed to extract the features).
  4. The degree of response varies greatly across different videos.

Motivation of Trident-head

  1. Consider both instant-level and segment-level features.
  2. Treat the predicted frame together with a fixed number of adjacent frames as the segment-level feature.
  3. That is, can the segment-level feature be built from instant-level features?

Trident-head

  1. Three branches:
    1. The start-boundary and end-boundary branches extract segment-level features.
    2. The center-offset branch extracts instant-level features.
  2. E.g., the predicted start boundary is decided jointly by the start-boundary branch and the center-offset branch (see the sketch after this list):
  3. The expectation is computed over a window of B bins, so the choice of B matters:
    1. If B is too small, boundaries that lie farther away cannot be found.
    2. If B is too large, learning and convergence become harder, so the predicted result is less accurate.
  4. Combined with an FPN:
    1. A fixed B is set at each pyramid level; since the levels have different temporal resolutions, this yields effectively small and large Bs simultaneously.
    2. At output time, the prediction of each level is multiplied by the corresponding scale ratio to recover the real position.
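
A minimal sketch of the relative-boundary decoding described above: for an instant $t$, the start distance is the expectation over the $B+1$ bins to its left, $d^{st}_t=\sum_{b=0}^{B} b\cdot p_{t,b}$ with $p_{t,b}=Softmax_b(S_{t-b}+O^{st}_{t,b})$, where $S$ is the start-branch response and $O^{st}$ is the start channel of the center-offset branch. The tensor names and shapes below are assumptions for illustration, not the official implementation.

```python
import torch

def trident_decode_start(start_resp, center_off_start, B=16, stride=1.0):
    """Expected start-boundary distance at every instant (one FPN level).

    start_resp:       (T,)      response of the start-boundary branch.
    center_off_start: (T, B+1)  start channel of the center-offset branch,
                                one conditional value per neighbouring bin.
    """
    T = start_resp.shape[0]
    bins = torch.arange(B + 1)                                # bins 0..B to the left of t
    # Instant index each bin refers to, clamped at the sequence border.
    left_idx = (torch.arange(T)[:, None] - bins[None, :]).clamp(min=0)
    logits = start_resp[left_idx] + center_off_start          # (T, B+1)
    prob = logits.softmax(dim=-1)                             # relative boundary distribution
    expected_bin = (prob * bins.float()).sum(dim=-1)          # expectation over the bins
    return expected_bin * stride                              # scaled by the level's stride
```

The end-boundary distance is obtained symmetrically from the bins to the right of $t$.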

The second question: Attention in Temporal Dimension.

  1. Many methods require complex attention mechanisms to make the network work.
  2. The success of the previous transformer-based layers(in TAD) primarily relies on their macro-architecture, rather than the self-attention mechanism.
  3. As shown above, when 1D convolution replaces self-attention, the average mAP drops by only 1.9; but when a plain CNN baseline replaces the Transformer baseline, the average mAP drops sharply. This indicates that the Transformer’s effectiveness comes from its macro-structure, not from self-attention.
  4. The Rank-Loss Problem of Self-Attention

    In TAD, making the features similar is disastrous, because we need to distinguish whether a position belongs to an action or not.

  5. Pure LayerNorm normalizes a feature $x\in R^n$ to a modulus of $\sqrt{n}$:
    $x’ = LayerNorm(x)$

    $x’_i = \frac{x_i-mean(x)}{\sqrt{\frac{1}{n} {\textstyle \sum_{i}^{}(x_i-mean(x))^2}}}$

    $\left | x’ \right | ^2_2 = {\textstyle \sum_{i}^{}x’^2_i}=n$

  6. The evidence on HACS:
    we consider the cosine similarity:

    1. Here SA is self-attention, SGP is the layer proposed by the author, and BackB is the backbone network used to extract the features.
    2. Here the value is the average cosine similarity between each feature and the mean feature within the same layer(?)
  7. Consider self-attention:
    $V’=WV$
    $W=Softmax(\frac{QK^T}{\sqrt{d}})$
    W is non-negative and each of its rows sums to 1, so the rows of $V’$ are [[conceptAI#convex combination|convex combinations]] of the input $V$.
    By contrast, the values in a convolution kernel can be negative and need not sum to 1 (a numeric check of both facts follows this list).
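
A quick numeric check of the two facts above (pure LayerNorm yields a modulus of $\sqrt{n}$; softmax attention weights form convex combinations); the shapes are arbitrary examples.

```python
import torch

n, T, d = 512, 8, 64

# (1) Pure LayerNorm (no affine parameters) maps x to a vector with |x'|^2 ~= n.
x = torch.randn(n)
x_norm = torch.nn.functional.layer_norm(x, (n,))
print(float(x_norm.norm() ** 2), n)          # ~512.0 vs 512

# (2) Softmax attention weights are non-negative and each row sums to 1,
#     so every row of V' = W V is a convex combination of the rows of V.
q, k = torch.randn(T, d), torch.randn(T, d)
W = torch.softmax(q @ k.t() / d ** 0.5, dim=-1)
print(bool((W >= 0).all()), W.sum(dim=-1))   # True, a vector of ones
```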

The author’s Solution: The SGP layer

  1. increase the discriminability of the features.
  2. capture temporal information with receptive fields of different scales.
    $f_{SGP}=\Phi (x)FC(x)+\psi(x)(Conv_w(x)+Conv_{kw}(x))+x$,

    $\Phi(x)=ReLU(FC(AvgPool(x)))$,

    $\psi(x)=Conv_w(x)$

    Window-level branch: lets the network adaptively extract features at different scales.

    In detail:

    1. the author uses depth-wise convolutions to reduce the computation of the network.
    2. an additional residual connection is added (a sketch of the full layer follows this list).
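
A minimal sketch of the SGP equations above, assuming features of shape (batch, channels, T); normalization and other details of the official implementation are omitted, and the hyper-parameters w and k are placeholders.

```python
import torch
import torch.nn as nn

class SGPSketch(nn.Module):
    """f_SGP = Phi(x) * FC(x) + psi(x) * (Conv_w(x) + Conv_kw(x)) + x."""

    def __init__(self, channels, w=3, k=5):
        super().__init__()
        # Instant-level branch: a video-level gate that sharpens feature discriminability.
        self.fc = nn.Conv1d(channels, channels, kernel_size=1)       # FC along channels
        self.fc_gate = nn.Conv1d(channels, channels, kernel_size=1)  # FC inside Phi
        # Window-level branch: depth-wise convs with receptive fields w and k*w.
        self.conv_w = nn.Conv1d(channels, channels, w, padding=w // 2, groups=channels)
        self.conv_kw = nn.Conv1d(channels, channels, k * w, padding=(k * w) // 2, groups=channels)
        self.psi = nn.Conv1d(channels, channels, w, padding=w // 2, groups=channels)

    def forward(self, x):                                             # x: (batch, channels, T)
        phi = torch.relu(self.fc_gate(x.mean(dim=-1, keepdim=True)))  # Phi(x) = ReLU(FC(AvgPool(x)))
        instant = phi * self.fc(x)                                    # Phi(x) * FC(x)
        window = self.psi(x) * (self.conv_w(x) + self.conv_kw(x))     # psi(x) * (Conv_w + Conv_kw)
        return instant + window + x                                   # extra residual connection
```

With w=3 and k=5, both depth-wise kernels have odd size, so the chosen padding keeps the temporal length unchanged.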

Experiment

The performance is strong: higher accuracy and faster speed than prior methods.

Refs

  1. https://www.bilibili.com/video/BV12M4y117GZ/?spm_id_from=333.337.search-card.all.click&vd_source=2c23be48ba22c91130ce4868020ab598 (‘4.10)
  2. Paper: https://arxiv.org/abs/2303.07347
  3. Code: https://github.com/dingfengshi/TriDet