mmdet.apis¶
mmdet.core¶
anchor¶
- class mmdet.core.anchor.AnchorGenerator(strides, ratios, scales=None, base_sizes=None, scale_major=True, octave_base_scale=None, scales_per_octave=None, centers=None, center_offset=0.0)[source]¶
Standard anchor generator for 2D anchor-based detectors.
- Parameters
strides (list[int] | list[tuple[int, int]]) – Strides of anchors in multiple feature levels in order (w, h).
ratios (list[float]) – The list of ratios between the height and width of anchors in a single level.
scales (list[int] | None) – Anchor scales for anchors in a single level. It cannot be set at the same time as octave_base_scale and scales_per_octave.
base_sizes (list[int] | None) – The basic sizes of anchors in multiple levels. If None is given, strides will be used as base_sizes. (If strides are non square, the shortest stride is taken.)
scale_major (bool) – Whether to multiply scales first when generating base anchors. If true, the anchors in the same row will have the same scales. By default it is True in V2.0
octave_base_scale (int) – The base scale of octave.
scales_per_octave (int) – Number of scales for each octave. octave_base_scale and scales_per_octave are usually used in retinanet and the scales should be None when they are set.
centers (list[tuple[float, float]] | None) – The centers of the anchor relative to the feature grid center in multiple feature levels. By default it is set to be None and not used. If a list of tuple of float is given, they will be used to shift the centers of anchors.
center_offset (float) – The offset of center in proportion to anchors’ width and height. By default it is 0 in V2.0.
Examples
>>> from mmdet.core import AnchorGenerator
>>> self = AnchorGenerator([16], [1.], [1.], [9])
>>> all_anchors = self.grid_priors([(2, 2)], device='cpu')
>>> print(all_anchors)
[tensor([[-4.5000, -4.5000,  4.5000,  4.5000],
        [11.5000, -4.5000, 20.5000,  4.5000],
        [-4.5000, 11.5000,  4.5000, 20.5000],
        [11.5000, 11.5000, 20.5000, 20.5000]])]
>>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18])
>>> all_anchors = self.grid_priors([(2, 2), (1, 1)], device='cpu')
>>> print(all_anchors)
[tensor([[-4.5000, -4.5000,  4.5000,  4.5000],
        [11.5000, -4.5000, 20.5000,  4.5000],
        [-4.5000, 11.5000,  4.5000, 20.5000],
        [11.5000, 11.5000, 20.5000, 20.5000]]),
 tensor([[-9., -9., 9., 9.]])]
- gen_base_anchors()[source]¶
Generate base anchors.
- Returns
Base anchors of a feature grid in multiple feature levels.
- Return type
list(torch.Tensor)
- gen_single_level_base_anchors(base_size, scales, ratios, center=None)[source]¶
Generate base anchors of a single level.
- Parameters
base_size (int | float) – Basic size of an anchor.
scales (torch.Tensor) – Scales of the anchor.
ratios (torch.Tensor) – The ratio between the height and width of anchors in a single level.
center (tuple[float], optional) – The center of the base anchor related to a single feature grid. Defaults to None.
- Returns
Anchors in a single-level feature map.
- Return type
torch.Tensor
- grid_anchors(featmap_sizes, device='cuda')[source]¶
Generate grid anchors in multiple feature levels.
- Parameters
featmap_sizes (list[tuple]) – List of feature map sizes in multiple feature levels.
device (str) – Device where the anchors will be put on.
- Returns
Anchors in multiple feature levels. The sizes of each tensor should be [N, 4], where N = width * height * num_base_anchors, width and height are the sizes of the corresponding feature level, num_base_anchors is the number of anchors for that level.
- Return type
list[torch.Tensor]
- grid_priors(featmap_sizes, dtype=torch.float32, device='cuda')[source]¶
Generate grid anchors in multiple feature levels.
- Parameters
featmap_sizes (list[tuple]) – List of feature map sizes in multiple feature levels.
dtype (torch.dtype) – Dtype of priors. Default: torch.float32.
device (str) – The device where the anchors will be put on.
- Returns
Anchors in multiple feature levels. The sizes of each tensor should be [N, 4], where N = width * height * num_base_anchors, width and height are the sizes of the corresponding feature level, num_base_anchors is the number of anchors for that level.
- Return type
list[torch.Tensor]
- property num_base_anchors¶
total number of base anchors in a feature grid
- Type
list[int]
- property num_base_priors¶
The number of priors (anchors) at a point on the feature grid
- Type
list[int]
- property num_levels¶
number of feature levels that the generator will be applied to
- Type
int
- single_level_grid_anchors(base_anchors, featmap_size, stride=(16, 16), device='cuda')[source]¶
Generate grid anchors of a single level.
Note
This function is usually called by method self.grid_anchors.
- Parameters
base_anchors (torch.Tensor) – The base anchors of a feature grid.
featmap_size (tuple[int]) – Size of the feature maps.
stride (tuple[int], optional) – Stride of the feature map in order (w, h). Defaults to (16, 16).
device (str, optional) – Device the tensor will be put on. Defaults to ‘cuda’.
- Returns
Anchors in the overall feature maps.
- Return type
torch.Tensor
- single_level_grid_priors(featmap_size, level_idx, dtype=torch.float32, device='cuda')[source]¶
Generate grid anchors of a single level.
Note
This function is usually called by method self.grid_priors.
- Parameters
featmap_size (tuple[int]) – Size of the feature maps.
level_idx (int) – The index of corresponding feature map level.
dtype (torch.dtype) – Data type of points. Defaults to torch.float32.
device (str, optional) – The device the tensor will be put on. Defaults to 'cuda'.
- Returns
Anchors in the overall feature maps.
- Return type
torch.Tensor
- single_level_valid_flags(featmap_size, valid_size, num_base_anchors, device='cuda')[source]¶
Generate the valid flags of anchor in a single feature map.
- Parameters
featmap_size (tuple[int]) – The size of feature maps, arranged as (h, w).
valid_size (tuple[int]) – The valid size of the feature maps.
num_base_anchors (int) – The number of base anchors.
device (str, optional) – Device where the flags will be put on. Defaults to ‘cuda’.
- Returns
The valid flags of each anchor in a single level feature map.
- Return type
torch.Tensor
- sparse_priors(prior_idxs, featmap_size, level_idx, dtype=torch.float32, device='cuda')[source]¶
Generate sparse anchors according to the prior_idxs.
- Parameters
prior_idxs (Tensor) – The index of corresponding anchors in the feature map.
featmap_size (tuple[int]) – feature map size arrange as (h, w).
level_idx (int) – The level index of corresponding feature map.
dtype (torch.dtype) – Data type of points. Defaults to torch.float32.
device (torch.device) – The device where the points are located.
- Returns
Anchor with shape (N, 4), N should be equal to the length of prior_idxs.
- Return type
Tensor
- valid_flags(featmap_sizes, pad_shape, device='cuda')[source]¶
Generate valid flags of anchors in multiple feature levels.
- Parameters
featmap_sizes (list(tuple)) – List of feature map sizes in multiple feature levels.
pad_shape (tuple) – The padded shape of the image.
device (str) – Device where the anchors will be put on.
- Returns
Valid flags of anchors in multiple levels.
- Return type
list(torch.Tensor)
- class mmdet.core.anchor.LegacyAnchorGenerator(strides, ratios, scales=None, base_sizes=None, scale_major=True, octave_base_scale=None, scales_per_octave=None, centers=None, center_offset=0.0)[source]¶
Legacy anchor generator used in MMDetection V1.x.
Note
Difference to the V2.0 anchor generator:
The center offset of V1.x anchors is set to 0.5 rather than 0.
The width/height of anchors are reduced by 1 when calculating the anchors’ centers and corners to meet the V1.x coordinate system.
The anchors’ corners are quantized.
- Parameters
strides (list[int] | list[tuple[int]]) – Strides of anchors in multiple feature levels.
ratios (list[float]) – The list of ratios between the height and width of anchors in a single level.
scales (list[int] | None) – Anchor scales for anchors in a single level. It cannot be set at the same time as octave_base_scale and scales_per_octave.
base_sizes (list[int]) – The basic sizes of anchors in multiple levels. If None is given, strides will be used to generate base_sizes.
scale_major (bool) – Whether to multiply scales first when generating base anchors. If true, the anchors in the same row will have the same scales. By default it is True in V2.0
octave_base_scale (int) – The base scale of octave.
scales_per_octave (int) – Number of scales for each octave. octave_base_scale and scales_per_octave are usually used in retinanet and the scales should be None when they are set.
centers (list[tuple[float, float]] | None) – The centers of the anchor relative to the feature grid center in multiple feature levels. By default it is set to be None and not used. If a list of tuples of float is given, they will be used to shift the centers of anchors.
center_offset (float) – The offset of center in proportion to anchors’ width and height. By default it is 0 in V2.0 but it should be 0.5 for v1.x models.
Examples
>>> from mmdet.core import LegacyAnchorGenerator
>>> self = LegacyAnchorGenerator(
>>>     [16], [1.], [1.], [9], center_offset=0.5)
>>> all_anchors = self.grid_anchors(((2, 2),), device='cpu')
>>> print(all_anchors)
[tensor([[ 0.,  0.,  8.,  8.],
        [16.,  0., 24.,  8.],
        [ 0., 16.,  8., 24.],
        [16., 16., 24., 24.]])]
- gen_single_level_base_anchors(base_size, scales, ratios, center=None)[source]¶
Generate base anchors of a single level.
Note
The width/height of anchors are reduced by 1 when calculating the centers and corners to meet the V1.x coordinate system.
- Parameters
base_size (int | float) – Basic size of an anchor.
scales (torch.Tensor) – Scales of the anchor.
ratios (torch.Tensor) – The ratio between the height and width of anchors in a single level.
center (tuple[float], optional) – The center of the base anchor related to a single feature grid. Defaults to None.
- Returns
Anchors in a single-level feature map.
- Return type
torch.Tensor
- class mmdet.core.anchor.MlvlPointGenerator(strides, offset=0.5)[source]¶
Standard points generator for multi-level (Mlvl) feature maps in 2D points-based detectors.
- Parameters
strides (list[int] | list[tuple[int, int]]) – Strides of anchors in multiple feature levels in order (w, h).
offset (float) – The offset of points, the value is normalized with corresponding stride. Defaults to 0.5.
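A minimal usage sketch of grid_priors (hedged; with stride 16 and offset 0.5 a 2 x 2 feature map yields 4 points, only shapes are asserted here):
>>> from mmdet.core.anchor import MlvlPointGenerator
>>> self = MlvlPointGenerator([16], offset=0.5)
>>> all_points = self.grid_priors([(2, 2)], device='cpu')
>>> assert len(all_points) == 1            # one feature level
>>> assert all_points[0].shape == (4, 2)   # 2 * 2 points of (coord_x, coord_y)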
- grid_priors(featmap_sizes, dtype=torch.float32, device='cuda', with_stride=False)[source]¶
Generate grid points of multiple feature levels.
- Parameters
featmap_sizes (list[tuple]) – List of feature map sizes in multiple feature levels, each size arranged as (h, w).
dtype (torch.dtype) – Dtype of priors. Default: torch.float32.
device (str) – The device where the anchors will be put on.
with_stride (bool) – Whether to concatenate the stride to the last dimension of points.
- Returns
Points of multiple feature levels. The sizes of each tensor should be (N, 2) when with_stride is False, where N = width * height, width and height are the sizes of the corresponding feature level, and the last dimension 2 represents (coord_x, coord_y); otherwise the shape should be (N, 4), and the last dimension 4 represents (coord_x, coord_y, stride_w, stride_h).
- Return type
list[torch.Tensor]
- property num_base_priors¶
The number of priors (points) at a point on the feature grid
- Type
list[int]
- property num_levels¶
number of feature levels that the generator will be applied to
- Type
int
- single_level_grid_priors(featmap_size, level_idx, dtype=torch.float32, device='cuda', with_stride=False)[source]¶
Generate grid Points of a single level.
Note
This function is usually called by method self.grid_priors.
- Parameters
featmap_size (tuple[int]) – Size of the feature maps, arrange as (h, w).
level_idx (int) – The index of corresponding feature map level.
dtype (torch.dtype) – Dtype of priors. Default: torch.float32.
device (str, optional) – The device the tensor will be put on. Defaults to 'cuda'.
with_stride (bool) – Concatenate the stride to the last dimension of points.
- Returns
Points of a single feature level. The shape of the tensor should be (N, 2) when with_stride is False, where N = width * height, width and height are the sizes of the corresponding feature level, and the last dimension 2 represents (coord_x, coord_y); otherwise the shape should be (N, 4), and the last dimension 4 represents (coord_x, coord_y, stride_w, stride_h).
- Return type
Tensor
- single_level_valid_flags(featmap_size, valid_size, device='cuda')[source]¶
Generate the valid flags of points of a single feature map.
- Parameters
featmap_size (tuple[int]) – The size of feature maps, arranged as (h, w).
valid_size (tuple[int]) – The valid size of the feature maps, arranged as (h, w).
device (str, optional) – The device where the flags will be put on. Defaults to ‘cuda’.
- Returns
The valid flags of each point in a single-level feature map.
- Return type
torch.Tensor
- sparse_priors(prior_idxs, featmap_size, level_idx, dtype=torch.float32, device='cuda')[source]¶
Generate sparse points according to the prior_idxs.
- Parameters
prior_idxs (Tensor) – The index of corresponding anchors in the feature map.
featmap_size (tuple[int]) – Feature map size arranged as (w, h).
level_idx (int) – The level index of corresponding feature map.
dtype (torch.dtype) – Data type of points. Defaults to torch.float32.
device (torch.device) – The device where the points are located.
- Returns
Anchor with shape (N, 2), N should be equal to the length of prior_idxs, and the last dimension 2 represents (coord_x, coord_y).
- Return type
Tensor
- valid_flags(featmap_sizes, pad_shape, device='cuda')[source]¶
Generate valid flags of points of multiple feature levels.
- Parameters
featmap_sizes (list(tuple)) – List of feature map sizes in multiple feature levels, each size arranged as (h, w).
pad_shape (tuple(int)) – The padded shape of the image, arrange as (h, w).
device (str) – The device where the anchors will be put on.
- Returns
Valid flags of points of multiple levels.
- Return type
list(torch.Tensor)
- class mmdet.core.anchor.YOLOAnchorGenerator(strides, base_sizes)[source]¶
Anchor generator for YOLO.
- Parameters
strides (list[int] | list[tuple[int, int]]) – Strides of anchors in multiple feature levels.
base_sizes (list[list[tuple[int, int]]]) – The basic sizes of anchors in multiple levels.
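A minimal construction sketch (hedged); the three anchor sizes below are illustrative values for a single stride-32 level:
>>> from mmdet.core.anchor import YOLOAnchorGenerator
>>> self = YOLOAnchorGenerator(
>>>     strides=[32],
>>>     base_sizes=[[(116, 90), (156, 198), (373, 326)]])
>>> base_anchors = self.gen_base_anchors()
>>> assert len(base_anchors) == 1            # one feature level
>>> assert base_anchors[0].shape == (3, 4)   # three base anchors per grid cell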
- gen_base_anchors()[source]¶
Generate base anchors.
- Returns
Base anchors of a feature grid in multiple feature levels.
- Return type
list(torch.Tensor)
- gen_single_level_base_anchors(base_sizes_per_level, center=None)[source]¶
Generate base anchors of a single level.
- Parameters
base_sizes_per_level (list[tuple[int, int]]) – Basic sizes of anchors.
center (tuple[float], optional) – The center of the base anchor related to a single feature grid. Defaults to None.
- Returns
Anchors in a single-level feature map.
- Return type
torch.Tensor
- property num_levels¶
number of feature levels that the generator will be applied to
- Type
int
- responsible_flags(featmap_sizes, gt_bboxes, device='cuda')[source]¶
Generate responsible anchor flags of grid cells in multiple scales.
- Parameters
featmap_sizes (list(tuple)) – List of feature map sizes in multiple feature levels.
gt_bboxes (Tensor) – Ground truth boxes, shape (n, 4).
device (str) – Device where the anchors will be put on.
- Returns
responsible flags of anchors in multiple levels
- Return type
list(torch.Tensor)
- single_level_responsible_flags(featmap_size, gt_bboxes, stride, num_base_anchors, device='cuda')[source]¶
Generate the responsible flags of anchor in a single feature map.
- Parameters
featmap_size (tuple[int]) – The size of feature maps.
gt_bboxes (Tensor) – Ground truth boxes, shape (n, 4).
stride (tuple(int)) – stride of current level
num_base_anchors (int) – The number of base anchors.
device (str, optional) – Device where the flags will be put on. Defaults to ‘cuda’.
- Returns
The valid flags of each anchor in a single level feature map.
- Return type
torch.Tensor
- mmdet.core.anchor.anchor_inside_flags(flat_anchors, valid_flags, img_shape, allowed_border=0)[source]¶
Check whether the anchors are inside the border.
- Parameters
flat_anchors (torch.Tensor) – Flatten anchors, shape (n, 4).
valid_flags (torch.Tensor) – An existing valid flags of anchors.
img_shape (tuple(int)) – Shape of current image.
allowed_border (int, optional) – The border to allow the valid anchor. Defaults to 0.
- Returns
Flags indicating whether the anchors are inside a valid range.
- Return type
torch.Tensor
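A minimal sketch (hedged); with allowed_border=0 only the anchor lying inside the (h, w) image shape keeps a True flag:
>>> import torch
>>> from mmdet.core.anchor import anchor_inside_flags
>>> flat_anchors = torch.Tensor([[0, 0, 10, 10], [90, 90, 110, 110]])
>>> valid_flags = torch.ones(2, dtype=torch.bool)
>>> inside = anchor_inside_flags(flat_anchors, valid_flags, img_shape=(100, 100))
>>> assert inside.tolist() == [True, False]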
- mmdet.core.anchor.calc_region(bbox, ratio, featmap_size=None)[source]¶
Calculate a proportional bbox region.
The bbox center is fixed and the new h’ and w’ are h * ratio and w * ratio.
- Parameters
bbox (Tensor) – Bboxes to calculate regions, shape (n, 4).
ratio (float) – Ratio of the output region.
featmap_size (tuple) – Feature map size used for clipping the boundary.
- Returns
x1, y1, x2, y2
- Return type
tuple
bbox¶
- class mmdet.core.bbox.AssignResult(num_gts, gt_inds, max_overlaps, labels=None)[source]¶
Stores assignments between predicted and truth boxes.
- num_gts¶
the number of truth boxes considered when computing this assignment
- Type
int
- gt_inds¶
for each predicted box indicates the 1-based index of the assigned truth box. 0 means unassigned and -1 means ignore.
- Type
LongTensor
- max_overlaps¶
the iou between the predicted box and its assigned truth box.
- Type
FloatTensor
- labels¶
If specified, for each predicted box indicates the category label of the assigned truth box.
- Type
None | LongTensor
Example
>>> # An assign result between 4 predicted boxes and 9 true boxes
>>> # where only two boxes were assigned.
>>> num_gts = 9
>>> max_overlaps = torch.LongTensor([0, .5, .9, 0])
>>> gt_inds = torch.LongTensor([-1, 1, 2, 0])
>>> labels = torch.LongTensor([0, 3, 4, 0])
>>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels)
>>> print(str(self))  # xdoctest: +IGNORE_WANT
<AssignResult(num_gts=9, gt_inds.shape=(4,), max_overlaps.shape=(4,), labels.shape=(4,))>
>>> # Force addition of gt labels (when adding gt as proposals)
>>> new_labels = torch.LongTensor([3, 4, 5])
>>> self.add_gt_(new_labels)
>>> print(str(self))  # xdoctest: +IGNORE_WANT
<AssignResult(num_gts=9, gt_inds.shape=(7,), max_overlaps.shape=(7,), labels.shape=(7,))>
- add_gt_(gt_labels)[source]¶
Add ground truth as assigned results.
- Parameters
gt_labels (torch.Tensor) – Labels of gt boxes
- property info¶
a dictionary of info about the object
- Type
dict
- property num_preds¶
the number of predictions in this assignment
- Type
int
- classmethod random(**kwargs)[source]¶
Create random AssignResult for tests or debugging.
- Parameters
num_preds – number of predicted boxes
num_gts – number of true boxes
p_ignore (float) – probability of a predicted box assigned to an ignored truth
p_assigned (float) – probability of a predicted box not being assigned
p_use_label (float | bool) – with labels or not
rng (None | int | numpy.random.RandomState) – seed or state
- Returns
Randomly generated assign results.
- Return type
AssignResult
Example
>>> from mmdet.core.bbox.assigners.assign_result import *  # NOQA
>>> self = AssignResult.random()
>>> print(self.info)
- class mmdet.core.bbox.BaseBBoxCoder(**kwargs)[source]¶
Base bounding box coder.
- class mmdet.core.bbox.BaseSampler(num, pos_fraction, neg_pos_ub=- 1, add_gt_as_proposals=True, **kwargs)[source]¶
Base class of samplers.
- sample(assign_result, bboxes, gt_bboxes, gt_labels=None, **kwargs)[source]¶
Sample positive and negative bboxes.
This is a simple implementation of bbox sampling given candidates, assigning results and ground truth bboxes.
- Parameters
assign_result (AssignResult) – Bbox assigning results.
bboxes (Tensor) – Boxes to be sampled from.
gt_bboxes (Tensor) – Ground truth bboxes.
gt_labels (Tensor, optional) – Class labels of ground truth bboxes.
- Returns
Sampling result.
- Return type
SamplingResult
Example
>>> from mmdet.core.bbox import RandomSampler
>>> from mmdet.core.bbox import AssignResult
>>> from mmdet.core.bbox.demodata import ensure_rng, random_boxes
>>> rng = ensure_rng(None)
>>> assign_result = AssignResult.random(rng=rng)
>>> bboxes = random_boxes(assign_result.num_preds, rng=rng)
>>> gt_bboxes = random_boxes(assign_result.num_gts, rng=rng)
>>> gt_labels = None
>>> self = RandomSampler(num=32, pos_fraction=0.5, neg_pos_ub=-1,
>>>                      add_gt_as_proposals=False)
>>> self = self.sample(assign_result, bboxes, gt_bboxes, gt_labels)
- class mmdet.core.bbox.BboxOverlaps2D(scale=1.0, dtype=None)[source]¶
2D Overlaps (e.g. IoUs, GIoUs) Calculator.
- class mmdet.core.bbox.CenterRegionAssigner(pos_scale, neg_scale, min_pos_iof=0.01, ignore_gt_scale=0.5, foreground_dominate=False, iou_calculator={'type': 'BboxOverlaps2D'})[source]¶
Assign pixels at the center region of a bbox as positive.
Each proposal will be assigned with -1, 0, or a positive integer indicating the ground truth index.
-1: negative samples
semi-positive numbers: positive sample, index (0-based) of assigned gt
- Parameters
pos_scale (float) – Threshold within which pixels are labelled as positive.
neg_scale (float) – Threshold above which pixels are labelled as negative.
min_pos_iof (float) – Minimum iof of a pixel with a gt to be labelled as positive. Default: 1e-2
ignore_gt_scale (float) – Threshold within which the pixels are ignored when the gt is labelled as shadowed. Default: 0.5
foreground_dominate (bool) – If True, the bbox will be assigned as positive when a gt’s kernel region overlaps with another’s shadowed (ignored) region, otherwise it is set as ignored. Default to False.
- assign(bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]¶
Assign gt to bboxes.
This method assigns gts to every bbox (proposal/anchor), each bbox will be assigned with -1, or a semi-positive number. -1 means negative sample, semi-positive number is the index (0-based) of assigned gt.
- Parameters
bboxes (Tensor) – Bounding boxes to be assigned, shape(n, 4).
gt_bboxes (Tensor) – Groundtruth boxes, shape (k, 4).
gt_bboxes_ignore (tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.
gt_labels (tensor, optional) – Label of gt_bboxes, shape (num_gts,).
- Returns
The assigned result. Note that shadowed_labels of shape (N, 2) is also added as an assign_result attribute. shadowed_labels is a tensor composed of N pairs of [anchor_ind, class_label], where N is the number of anchors that lie in the outer region of a gt, anchor_ind is the shadowed anchor index and class_label is the shadowed class label.
- Return type
AssignResult
Example
>>> self = CenterRegionAssigner(0.2, 0.2)
>>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
>>> gt_bboxes = torch.Tensor([[0, 0, 10, 10]])
>>> assign_result = self.assign(bboxes, gt_bboxes)
>>> expected_gt_inds = torch.LongTensor([1, 0])
>>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
- assign_one_hot_gt_indices(is_bbox_in_gt_core, is_bbox_in_gt_shadow, gt_priority=None)[source]¶
Assign only one gt index to each prior box.
Gts with large gt_priority are more likely to be assigned.
- Parameters
is_bbox_in_gt_core (Tensor) – Bool tensor indicating the bbox center is in the core area of a gt (e.g. 0-0.2). Shape: (num_prior, num_gt).
is_bbox_in_gt_shadow (Tensor) – Bool tensor indicating the bbox center is in the shadowed area of a gt (e.g. 0.2-0.5). Shape: (num_prior, num_gt).
gt_priority (Tensor) – Priorities of gts. The gt with a higher priority is more likely to be assigned to the bbox when the bbox match with multiple gts. Shape: (num_gt, ).
- Returns
Returns (assigned_gt_inds, shadowed_gt_inds).
assigned_gt_inds: The assigned gt index of each prior bbox (i.e. index from 1 to num_gts). Shape: (num_prior, ).
shadowed_gt_inds: shadowed gt indices. It is a tensor of shape (num_ignore, 2) with first column being the shadowed prior bbox indices and the second column the shadowed gt indices (1-based).
- Return type
tuple
- get_gt_priorities(gt_bboxes)[source]¶
Get gt priorities according to their areas.
Smaller gt has higher priority.
- Parameters
gt_bboxes (Tensor) – Ground truth boxes, shape (k, 4).
- Returns
The priority of gts so that gts with larger priority are more likely to be assigned. Shape (k, ).
- Return type
Tensor
- class mmdet.core.bbox.CombinedSampler(pos_sampler, neg_sampler, **kwargs)[source]¶
A sampler that combines positive sampler and negative sampler.
- class mmdet.core.bbox.DeltaXYWHBBoxCoder(target_means=(0.0, 0.0, 0.0, 0.0), target_stds=(1.0, 1.0, 1.0, 1.0), clip_border=True, add_ctr_clamp=False, ctr_clamp=32)[source]¶
Delta XYWH BBox coder.
Following the practice in R-CNN, this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2).
- Parameters
target_means (Sequence[float]) – Denormalizing means of target for delta coordinates
target_stds (Sequence[float]) – Denormalizing standard deviation of target for delta coordinates
clip_border (bool, optional) – Whether clip the objects outside the border of the image. Defaults to True.
add_ctr_clamp (bool) – Whether to add center clamp; when added, the predicted box is clamped if its center is too far away from the original anchor’s center. Only used by YOLOF. Default False.
ctr_clamp (int) – the maximum pixel shift to clamp. Only used by YOLOF. Default 32.
- decode(bboxes, pred_bboxes, max_shape=None, wh_ratio_clip=0.016)[source]¶
Apply transformation pred_bboxes to boxes.
- Parameters
bboxes (torch.Tensor) – Basic boxes. Shape (B, N, 4) or (N, 4)
pred_bboxes (Tensor) – Encoded offsets with respect to each roi. Has shape (B, N, num_classes * 4) or (B, N, 4) or (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H when rois is a grid of anchors. Offset encoding follows [1].
max_shape (Sequence[int] or torch.Tensor or Sequence[Sequence[int]], optional) – Maximum bounds for boxes, specifies (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then the max_shape should be a Sequence[Sequence[int]] and the length of max_shape should also be B.
wh_ratio_clip (float, optional) – The allowed ratio between width and height.
- Returns
Decoded boxes.
- Return type
torch.Tensor
- encode(bboxes, gt_bboxes)[source]¶
Get box regression transformation deltas that can be used to transform the bboxes into the gt_bboxes.
- Parameters
bboxes (torch.Tensor) – Source boxes, e.g., object proposals.
gt_bboxes (torch.Tensor) – Target of the transformation, e.g., ground-truth boxes.
- Returns
Box transformation deltas
- Return type
torch.Tensor
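A minimal round-trip sketch with the default means and stds (hedged): decoding the encoded deltas against the same proposals should recover the ground-truth boxes up to floating point error.
>>> import torch
>>> from mmdet.core.bbox import DeltaXYWHBBoxCoder
>>> coder = DeltaXYWHBBoxCoder()
>>> bboxes = torch.Tensor([[0., 0., 10., 10.], [5., 5., 15., 15.]])
>>> gt_bboxes = torch.Tensor([[1., 1., 11., 11.], [4., 4., 16., 16.]])
>>> deltas = coder.encode(bboxes, gt_bboxes)
>>> decoded = coder.decode(bboxes, deltas)
>>> assert torch.allclose(decoded, gt_bboxes, atol=1e-4)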
- class mmdet.core.bbox.DistancePointBBoxCoder(clip_border=True)[source]¶
Distance Point BBox coder.
This coder encodes gt bboxes (x1, y1, x2, y2) into the distances from a point to the four boundaries (left, top, right, bottom) and decodes them back to the original boxes.
- Parameters
clip_border (bool, optional) – Whether clip the objects outside the border of the image. Defaults to True.
- decode(points, pred_bboxes, max_shape=None)[source]¶
Decode distance prediction to bounding box.
- Parameters
points (Tensor) – Shape (B, N, 2) or (N, 2).
pred_bboxes (Tensor) – Distance from the given point to 4 boundaries (left, top, right, bottom). Shape (B, N, 4) or (N, 4)
max_shape (Sequence[int] or torch.Tensor or Sequence[Sequence[int]], optional) – Maximum bounds for boxes, specifies (H, W, C) or (H, W). If priors shape is (B, N, 4), then the max_shape should be a Sequence[Sequence[int]], and the length of max_shape should also be B. Default None.
- Returns
Boxes with shape (N, 4) or (B, N, 4)
- Return type
Tensor
- encode(points, gt_bboxes, max_dis=None, eps=0.1)[source]¶
Encode bounding box to distances.
- Parameters
points (Tensor) – Shape (N, 2), The format is [x, y].
gt_bboxes (Tensor) – Shape (N, 4), The format is “xyxy”
max_dis (float) – Upper bound of the distance. Default None.
eps (float) – A small value to ensure target < max_dis, instead of <=. Default 0.1.
- Returns
Box transformation deltas. The shape is (N, 4).
- Return type
Tensor
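A minimal round-trip sketch (hedged; max_dis is left as None so no clamping is applied):
>>> import torch
>>> from mmdet.core.bbox import DistancePointBBoxCoder
>>> coder = DistancePointBBoxCoder()
>>> points = torch.Tensor([[5., 5.], [10., 10.]])
>>> gt_bboxes = torch.Tensor([[0., 0., 10., 10.], [5., 5., 15., 15.]])
>>> distances = coder.encode(points, gt_bboxes)
>>> decoded = coder.decode(points, distances)
>>> assert torch.allclose(decoded, gt_bboxes, atol=1e-4)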
- class mmdet.core.bbox.InstanceBalancedPosSampler(num, pos_fraction, neg_pos_ub=- 1, add_gt_as_proposals=True, **kwargs)[source]¶
Instance balanced sampler that samples equal number of positive samples for each instance.
- class mmdet.core.bbox.IoUBalancedNegSampler(num, pos_fraction, floor_thr=- 1, floor_fraction=0, num_bins=3, **kwargs)[source]¶
IoU Balanced Sampling.
arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)
Sampling proposals according to their IoU. floor_fraction of the needed RoIs are sampled randomly from proposals whose IoU is lower than floor_thr. The others are sampled from proposals whose IoU is higher than floor_thr; these proposals are drawn evenly from num_bins bins that split the IoU range.
- Parameters
num (int) – number of proposals.
pos_fraction (float) – fraction of positive proposals.
floor_thr (float) – threshold (minimum) IoU for IoU balanced sampling, set to -1 if all using IoU balanced sampling.
floor_fraction (float) – sampling fraction of proposals under floor_thr.
num_bins (int) – number of bins in IoU balanced sampling.
- sample_via_interval(max_overlaps, full_set, num_expected)[source]¶
Sample according to the iou interval.
- Parameters
max_overlaps (torch.Tensor) – IoU between bounding boxes and ground truth boxes.
full_set (set(int)) – A full set of indices of boxes.
num_expected (int) – Number of expected samples.
- Returns
Indices of samples
- Return type
np.ndarray
- class mmdet.core.bbox.MaxIoUAssigner(pos_iou_thr, neg_iou_thr, min_pos_iou=0.0, gt_max_assign_all=True, ignore_iof_thr=- 1, ignore_wrt_candidates=True, match_low_quality=True, gpu_assign_thr=- 1, iou_calculator={'type': 'BboxOverlaps2D'})[source]¶
Assign a corresponding gt bbox or background to each bbox.
Each proposal will be assigned with -1, or a semi-positive integer indicating the ground truth index.
-1: negative sample, no assigned gt
semi-positive integer: positive sample, index (0-based) of assigned gt
- Parameters
pos_iou_thr (float) – IoU threshold for positive bboxes.
neg_iou_thr (float or tuple) – IoU threshold for negative bboxes.
min_pos_iou (float) – Minimum iou for a bbox to be considered as a positive bbox. Positive samples can have smaller IoU than pos_iou_thr due to the 4th step (assign max IoU sample to each gt). min_pos_iou is set to avoid assigning bboxes that have extremely small iou with GT as positive samples. It brings about 0.3 mAP improvements in 1x schedule but does not affect the performance of 3x schedule. More comparisons can be found in PR #7464.
gt_max_assign_all (bool) – Whether to assign all bboxes with the same highest overlap with some gt to that gt.
ignore_iof_thr (float) – IoF threshold for ignoring bboxes (if gt_bboxes_ignore is specified). Negative values mean not ignoring any bboxes.
ignore_wrt_candidates (bool) – Whether to compute the iof between bboxes and gt_bboxes_ignore, or the contrary.
match_low_quality (bool) – Whether to allow low quality matches. This is usually allowed for RPN and single stage detectors, but not allowed in the second stage. Details are demonstrated in Step 4.
gpu_assign_thr (int) – The upper bound of the number of GT for GPU assign. When the number of gt is above this threshold, will assign on CPU device. Negative values mean not assign on CPU.
- assign(bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]¶
Assign gt to bboxes.
This method assigns a gt bbox to every bbox (proposal/anchor); each bbox will be assigned with -1, or a semi-positive number. -1 means negative sample, a semi-positive number is the index (0-based) of the assigned gt. The assignment is done in the following steps; the order matters.
assign every bbox to the background
assign proposals whose iou with all gts < neg_iou_thr to 0
for each bbox, if the iou with its nearest gt >= pos_iou_thr, assign the bbox to that gt
for each gt bbox, assign its nearest proposals (may be more than one) to itself
- Parameters
bboxes (Tensor) – Bounding boxes to be assigned, shape(n, 4).
gt_bboxes (Tensor) – Groundtruth boxes, shape (k, 4).
gt_bboxes_ignore (Tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.
gt_labels (Tensor, optional) – Label of gt_bboxes, shape (k, ).
- Returns
The assign result.
- Return type
AssignResult
Example
>>> self = MaxIoUAssigner(0.5, 0.5)
>>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
>>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]])
>>> assign_result = self.assign(bboxes, gt_bboxes)
>>> expected_gt_inds = torch.LongTensor([1, 0])
>>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
- class mmdet.core.bbox.OHEMSampler(num, pos_fraction, context, neg_pos_ub=- 1, add_gt_as_proposals=True, loss_key='loss_cls', **kwargs)[source]¶
Online Hard Example Mining Sampler described in Training Region-based Object Detectors with Online Hard Example Mining.
- class mmdet.core.bbox.PseudoSampler(**kwargs)[source]¶
A pseudo sampler that does not do sampling actually.
- sample(assign_result, bboxes, gt_bboxes, *args, **kwargs)[source]¶
Directly returns the positive and negative indices of samples.
- Parameters
assign_result (AssignResult) – Assigned results
bboxes (torch.Tensor) – Bounding boxes
gt_bboxes (torch.Tensor) – Ground truth boxes
- Returns
sampler results
- Return type
SamplingResult
- class mmdet.core.bbox.RandomSampler(num, pos_fraction, neg_pos_ub=- 1, add_gt_as_proposals=True, **kwargs)[source]¶
Random sampler.
- Parameters
num (int) – Number of samples
pos_fraction (float) – Fraction of positive samples
neg_pos_ub (int, optional) – Upper bound number of negative and positive samples. Defaults to -1.
add_gt_as_proposals (bool, optional) – Whether to add ground truth boxes as proposals. Defaults to True.
- random_choice(gallery, num)[source]¶
Random select some elements from the gallery.
If gallery is a Tensor, the returned indices will be a Tensor; If gallery is a ndarray or list, the returned indices will be a ndarray.
- Parameters
gallery (Tensor | ndarray | list) – indices pool.
num (int) – expected sample num.
- Returns
sampled indices.
- Return type
Tensor or ndarray
- class mmdet.core.bbox.RegionAssigner(center_ratio=0.2, ignore_ratio=0.5)[source]¶
Assign a corresponding gt bbox or background to each bbox.
Each proposal will be assigned with -1, 0, or a positive integer indicating the ground truth index.
-1: don’t care
0: negative sample, no assigned gt
positive integer: positive sample, index (1-based) of assigned gt
- Parameters
center_ratio – ratio of the region in the center of the bbox to define positive sample.
ignore_ratio – ratio of the region to define ignore samples.
- assign(mlvl_anchors, mlvl_valid_flags, gt_bboxes, img_meta, featmap_sizes, anchor_scale, anchor_strides, gt_bboxes_ignore=None, gt_labels=None, allowed_border=0)[source]¶
Assign gt to anchors.
This method assigns a gt bbox to every bbox (proposal/anchor); each bbox will be assigned with -1, 0, or a positive number. -1 means don’t care, 0 means negative sample, a positive number is the index (1-based) of the assigned gt.
The assignment is done in following steps, and the order matters.
Assign every anchor to 0 (negative)
(For each gt_bboxes) Compute ignore flags based on ignore_region then assign -1 to anchors w.r.t. ignore flags
(For each gt_bboxes) Compute pos flags based on center_region then assign gt_bboxes to anchors w.r.t. pos flags
(For each gt_bboxes) Compute ignore flags based on adjacent anchor level then assign -1 to anchors w.r.t. ignore flags
Assign anchor outside of image to -1
- Parameters
mlvl_anchors (list[Tensor]) – Multi level anchors.
mlvl_valid_flags (list[Tensor]) – Multi level valid flags.
gt_bboxes (Tensor) – Ground truth bboxes of image, shape (k, 4).
img_meta (dict) – Meta info of image.
featmap_sizes (list[Tensor]) – Feature map size of each level.
anchor_scale (int) – Scale of the anchor.
anchor_strides (list[int]) – Stride of the anchor.
gt_bboxes_ignore (Tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.
gt_labels (Tensor, optional) – Label of gt_bboxes, shape (k, ).
allowed_border (int, optional) – The border to allow the valid anchor. Defaults to 0.
- Returns
The assign result.
- Return type
AssignResult
- class mmdet.core.bbox.SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, assign_result, gt_flags)[source]¶
Bbox sampling result.
Example
>>> # xdoctest: +IGNORE_WANT
>>> from mmdet.core.bbox.samplers.sampling_result import *  # NOQA
>>> self = SamplingResult.random(rng=10)
>>> print(f'self = {self}')
self = <SamplingResult({
    'neg_bboxes': torch.Size([12, 4]),
    'neg_inds': tensor([ 0,  1,  2,  4,  5,  6,  7,  8,  9, 10, 11, 12]),
    'num_gts': 4,
    'pos_assigned_gt_inds': tensor([], dtype=torch.int64),
    'pos_bboxes': torch.Size([0, 4]),
    'pos_inds': tensor([], dtype=torch.int64),
    'pos_is_gt': tensor([], dtype=torch.uint8)
})>
- property bboxes¶
concatenated positive and negative boxes
- Type
torch.Tensor
- property info¶
Returns a dictionary of info about the object.
- classmethod random(rng=None, **kwargs)[source]¶
- Parameters
rng (None | int | numpy.random.RandomState) – seed or state.
kwargs (keyword arguments) –
num_preds: number of predicted boxes
num_gts: number of true boxes
p_ignore (float): probability of a predicted box assigned to an ignored truth.
p_assigned (float): probability of a predicted box not being assigned.
p_use_label (float | bool): with labels or not.
- Returns
Randomly generated sampling result.
- Return type
SamplingResult
Example
>>> from mmdet.core.bbox.samplers.sampling_result import *  # NOQA
>>> self = SamplingResult.random()
>>> print(self.__dict__)
- class mmdet.core.bbox.ScoreHLRSampler(num, pos_fraction, context, neg_pos_ub=- 1, add_gt_as_proposals=True, k=0.5, bias=0, score_thr=0.05, iou_thr=0.5, **kwargs)[source]¶
Importance-based Sample Reweighting (ISR_N), described in Prime Sample Attention in Object Detection.
Score hierarchical local rank (HLR) differs from RandomSampler in the negative part. It first computes Score-HLR in a two-step way, then linearly maps the score HLR to the loss weights.
- Parameters
num (int) – Total number of sampled RoIs.
pos_fraction (float) – Fraction of positive samples.
context (BaseRoIHead) – RoI head that the sampler belongs to.
neg_pos_ub (int) – Upper bound of the ratio of num negative to num positive, -1 means no upper bound.
add_gt_as_proposals (bool) – Whether to add ground truth as proposals.
k (float) – Power of the non-linear mapping.
bias (float) – Shift of the non-linear mapping.
score_thr (float) – Minimum score that a negative sample is to be considered as valid bbox.
- static random_choice(gallery, num)[source]¶
Randomly select some elements from the gallery.
If gallery is a Tensor, the returned indices will be a Tensor; If gallery is a ndarray or list, the returned indices will be a ndarray.
- Parameters
gallery (Tensor | ndarray | list) – indices pool.
num (int) – expected sample num.
- Returns
sampled indices.
- Return type
Tensor or ndarray
- sample(assign_result, bboxes, gt_bboxes, gt_labels=None, img_meta=None, **kwargs)[source]¶
Sample positive and negative bboxes.
This is a simple implementation of bbox sampling given candidates, assigning results and ground truth bboxes.
- Parameters
assign_result (AssignResult) – Bbox assigning results.
bboxes (Tensor) – Boxes to be sampled from.
gt_bboxes (Tensor) – Ground truth bboxes.
gt_labels (Tensor, optional) – Class labels of ground truth bboxes.
- Returns
Sampling result and negative label weights.
- Return type
tuple[SamplingResult, Tensor]
- class mmdet.core.bbox.TBLRBBoxCoder(normalizer=4.0, clip_border=True)[source]¶
TBLR BBox coder.
Following the practice in FSAF, this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, right) and decodes it back to the original.
- Parameters
normalizer (list | float) – Normalization factor to be divided with when coding the coordinates. If it is a list, it should have length of 4 indicating normalization factor in tblr dims. Otherwise it is a unified float factor for all dims. Default: 4.0
clip_border (bool, optional) – Whether clip the objects outside the border of the image. Defaults to True.
- decode(bboxes, pred_bboxes, max_shape=None)[source]¶
Apply transformation pred_bboxes to boxes.
- Parameters
bboxes (torch.Tensor) – Basic boxes. Shape (B, N, 4) or (N, 4)
pred_bboxes (torch.Tensor) – Encoded boxes with shape (B, N, 4) or (N, 4)
max_shape (Sequence[int] or torch.Tensor or Sequence[Sequence[int]], optional) – Maximum bounds for boxes, specifies (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then the max_shape should be a Sequence[Sequence[int]] and the length of max_shape should also be B.
- Returns
Decoded boxes.
- Return type
torch.Tensor
- encode(bboxes, gt_bboxes)[source]¶
Get box regression transformation deltas that can be used to transform the bboxes into the gt_bboxes in the (top, left, bottom, right) order.
- Parameters
bboxes (torch.Tensor) – source boxes, e.g., object proposals.
gt_bboxes (torch.Tensor) – target of the transformation, e.g., ground truth boxes.
- Returns
Box transformation deltas
- Return type
torch.Tensor
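A minimal round-trip sketch with the default normalizer (hedged): encoding against priors and decoding the deltas with the same coder should reproduce the ground-truth boxes.
>>> import torch
>>> from mmdet.core.bbox import TBLRBBoxCoder
>>> coder = TBLRBBoxCoder(normalizer=4.0)
>>> bboxes = torch.Tensor([[0., 0., 10., 10.], [5., 5., 15., 15.]])
>>> gt_bboxes = torch.Tensor([[1., 2., 11., 12.], [4., 4., 16., 16.]])
>>> deltas = coder.encode(bboxes, gt_bboxes)
>>> decoded = coder.decode(bboxes, deltas)
>>> assert torch.allclose(decoded, gt_bboxes, atol=1e-4)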
- mmdet.core.bbox.bbox2distance(points, bbox, max_dis=None, eps=0.1)[source]¶
Encode bounding boxes to distances from the given points.
- Parameters
points (Tensor) – Shape (n, 2), [x, y].
bbox (Tensor) – Shape (n, 4), “xyxy” format
max_dis (float) – Upper bound of the distance.
eps (float) – A small value to ensure target < max_dis, instead of <=.
- Returns
Distances from the points to the box boundaries (left, top, right, bottom).
- Return type
Tensor
- mmdet.core.bbox.bbox2result(bboxes, labels, num_classes)[source]¶
Convert detection results to a list of numpy arrays.
- Parameters
bboxes (torch.Tensor | np.ndarray) – shape (n, 5)
labels (torch.Tensor | np.ndarray) – shape (n, )
num_classes (int) – class number, including background class
- Returns
bbox results of each class
- Return type
list(ndarray)
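A minimal sketch of the per-class layout (hedged) for three classes and two detections:
>>> import torch
>>> from mmdet.core.bbox import bbox2result
>>> bboxes = torch.Tensor([[0, 0, 10, 10, 0.9], [5, 5, 15, 15, 0.4]])
>>> labels = torch.LongTensor([0, 2])
>>> result = bbox2result(bboxes, labels, num_classes=3)
>>> assert len(result) == 3            # one (k, 5) array per class
>>> assert result[0].shape == (1, 5)   # the label-0 detection
>>> assert result[1].shape == (0, 5)   # no detections for class 1
>>> assert result[2].shape == (1, 5)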
- mmdet.core.bbox.bbox2roi(bbox_list)[source]¶
Convert a list of bboxes to roi format.
- Parameters
bbox_list (list[Tensor]) – a list of bboxes corresponding to a batch of images.
- Returns
shape (n, 5), [batch_ind, x1, y1, x2, y2]
- Return type
Tensor
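A minimal sketch (hedged); the image index is prepended to every box:
>>> import torch
>>> from mmdet.core.bbox import bbox2roi
>>> bbox_list = [
>>>     torch.Tensor([[0, 0, 10, 10]]),                # image 0
>>>     torch.Tensor([[5, 5, 15, 15], [2, 2, 8, 8]]),  # image 1
>>> ]
>>> rois = bbox2roi(bbox_list)
>>> assert rois.shape == (3, 5)
>>> assert rois[:, 0].tolist() == [0., 1., 1.]  # [batch_ind, x1, y1, x2, y2]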
- mmdet.core.bbox.bbox_cxcywh_to_xyxy(bbox)[source]¶
Convert bbox coordinates from (cx, cy, w, h) to (x1, y1, x2, y2).
- Parameters
bbox (Tensor) – Shape (n, 4) for bboxes.
- Returns
Converted bboxes.
- Return type
Tensor
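A minimal sketch of the conversion (hedged):
>>> import torch
>>> from mmdet.core.bbox import bbox_cxcywh_to_xyxy
>>> bbox = torch.Tensor([[5., 5., 10., 10.]])   # (cx, cy, w, h)
>>> assert torch.equal(bbox_cxcywh_to_xyxy(bbox),
>>>                    torch.Tensor([[0., 0., 10., 10.]]))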
- mmdet.core.bbox.bbox_flip(bboxes, img_shape, direction='horizontal')[source]¶
Flip bboxes horizontally or vertically.
- Parameters
bboxes (Tensor) – Shape (…, 4*k)
img_shape (tuple) – Image shape.
direction (str) – Flip direction, options are “horizontal”, “vertical”, “diagonal”. Default: “horizontal”
- Returns
Flipped bboxes.
- Return type
Tensor
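A minimal sketch (hedged, assuming the 2.x convention where x coordinates are mirrored about img_shape[1] without a -1 offset):
>>> import torch
>>> from mmdet.core.bbox import bbox_flip
>>> bboxes = torch.Tensor([[0., 0., 10., 10.]])
>>> flipped = bbox_flip(bboxes, img_shape=(60, 50), direction='horizontal')
>>> assert torch.equal(flipped, torch.Tensor([[40., 0., 50., 10.]]))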
- mmdet.core.bbox.bbox_mapping(bboxes, img_shape, scale_factor, flip, flip_direction='horizontal')[source]¶
Map bboxes from the original image scale to testing scale.
- mmdet.core.bbox.bbox_mapping_back(bboxes, img_shape, scale_factor, flip, flip_direction='horizontal')[source]¶
Map bboxes from testing scale to original image scale.
- mmdet.core.bbox.bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-06)[source]¶
Calculate overlap between two set of bboxes.
FP16 contributed by https://github.com/open-mmlab/mmdetection/pull/4889.
Note
Assume bboxes1 is M x 4 and bboxes2 is N x 4. When mode is 'iou', some new variables are generated when calculating IoU using the bbox_overlaps function:
1) is_aligned is False
    area1: M x 1, area2: N x 1, lt: M x N x 2, rb: M x N x 2, wh: M x N x 2, overlap: M x N x 1, union: M x N x 1, ious: M x N x 1
    Total memory: S = (9 x N x M + N + M) * 4 Byte.
    When using FP16, we can reduce: R = (9 x N x M + N + M) * 4 / 2 Byte.
    R larger than (N + M) * 4 * 2 is always true when N and M >= 1.
    Obviously, N + M <= N * M < 3 * N * M when N >= 2 and M >= 2; N + 1 < 3 * N when N or M is 1.
    Given M = 40 (ground truth), N = 400000 (three anchor boxes per grid, FPN, R-CNNs), R = 275 MB (one time).
    A special case (dense detection): M = 512 (ground truth), R = 3516 MB = 3.43 GB.
    When the batch size is B, the reduction is B x R. Therefore, CUDA memory runs out frequently.
    Experiments on GeForce RTX 2080Ti (11019 MiB):
    | dtype | M   | N      | Use      | Real     | Ideal    |
    |:-----:|:---:|:------:|:--------:|:--------:|:--------:|
    | FP32  | 512 | 400000 | 8020 MiB | --       | --       |
    | FP16  | 512 | 400000 | 4504 MiB | 3516 MiB | 3516 MiB |
    | FP32  | 40  | 400000 | 1540 MiB | --       | --       |
    | FP16  | 40  | 400000 | 1264 MiB | 276 MiB  | 275 MiB  |
2) is_aligned is True
    area1: N x 1, area2: N x 1, lt: N x 2, rb: N x 2, wh: N x 2, overlap: N x 1, union: N x 1, ious: N x 1
    Total memory: S = 11 x N * 4 Byte.
    When using FP16, we can reduce: R = 11 x N * 4 / 2 Byte.
The same holds for the 'giou' mode (larger than 'iou'). Time-wise, FP16 is generally faster than FP32. When gpu_assign_thr is not -1, it takes more time on CPU but does not reduce memory. There, we can reduce half the memory and keep the speed.
If
is_aligned
isFalse
, then calculate the overlaps between each bbox of bboxes1 and bboxes2, otherwise the overlaps between each aligned pair of bboxes1 and bboxes2.- Parameters
bboxes1 (Tensor) – shape (B, m, 4) in <x1, y1, x2, y2> format or empty.
bboxes2 (Tensor) – shape (B, n, 4) in <x1, y1, x2, y2> format or empty. B indicates the batch dim, in shape (B1, B2, …, Bn). If
is_aligned
isTrue
, then m and n must be equal.mode (str) – “iou” (intersection over union), “iof” (intersection over foreground) or “giou” (generalized intersection over union). Default “iou”.
is_aligned (bool, optional) – If True, then m and n must be equal. Default False.
eps (float, optional) – A value added to the denominator for numerical stability. Default 1e-6.
- Returns
shape (m, n) if
is_aligned
is False else shape (m,)- Return type
Tensor
Example
>>> bboxes1 = torch.FloatTensor([
>>>     [0, 0, 10, 10],
>>>     [10, 10, 20, 20],
>>>     [32, 32, 38, 42],
>>> ])
>>> bboxes2 = torch.FloatTensor([
>>>     [0, 0, 10, 20],
>>>     [0, 10, 10, 19],
>>>     [10, 10, 20, 20],
>>> ])
>>> overlaps = bbox_overlaps(bboxes1, bboxes2)
>>> assert overlaps.shape == (3, 3)
>>> overlaps = bbox_overlaps(bboxes1, bboxes2, is_aligned=True)
>>> assert overlaps.shape == (3, )
Example
>>> empty = torch.empty(0, 4)
>>> nonempty = torch.FloatTensor([[0, 0, 10, 9]])
>>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1)
>>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0)
>>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0)
- mmdet.core.bbox.bbox_rescale(bboxes, scale_factor=1.0)[source]¶
Rescale bounding box w.r.t. scale_factor.
- Parameters
bboxes (Tensor) – Shape (n, 4) for bboxes or (n, 5) for rois
scale_factor (float) – rescale factor
- Returns
Rescaled bboxes.
- Return type
Tensor
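A minimal sketch (hedged); the center is kept while the width and height are multiplied by scale_factor:
>>> import torch
>>> from mmdet.core.bbox import bbox_rescale
>>> bboxes = torch.Tensor([[0., 0., 10., 10.]])
>>> rescaled = bbox_rescale(bboxes, scale_factor=2.0)
>>> assert torch.equal(rescaled, torch.Tensor([[-5., -5., 15., 15.]]))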
- mmdet.core.bbox.bbox_xyxy_to_cxcywh(bbox)[source]¶
Convert bbox coordinates from (x1, y1, x2, y2) to (cx, cy, w, h).
- Parameters
bbox (Tensor) – Shape (n, 4) for bboxes.
- Returns
Converted bboxes.
- Return type
Tensor
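A minimal sketch (hedged); the conversion is the inverse of bbox_cxcywh_to_xyxy:
>>> import torch
>>> from mmdet.core.bbox import bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh
>>> bbox = torch.Tensor([[0., 0., 10., 20.]])
>>> cxcywh = bbox_xyxy_to_cxcywh(bbox)
>>> assert torch.equal(cxcywh, torch.Tensor([[5., 10., 10., 20.]]))
>>> assert torch.equal(bbox_cxcywh_to_xyxy(cxcywh), bbox)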
- mmdet.core.bbox.distance2bbox(points, distance, max_shape=None)[source]¶
Decode distance prediction to bounding box.
- Parameters
points (Tensor) – Shape (B, N, 2) or (N, 2).
distance (Tensor) – Distance from the given point to 4 boundaries (left, top, right, bottom). Shape (B, N, 4) or (N, 4)
max_shape (Sequence[int] or torch.Tensor or Sequence[Sequence[int]], optional) – Maximum bounds for boxes, specifies (H, W, C) or (H, W). If priors shape is (B, N, 4), then the max_shape should be a Sequence[Sequence[int]] and the length of max_shape should also be B.
- Returns
Boxes with shape (N, 4) or (B, N, 4)
- Return type
Tensor
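A minimal round-trip sketch with bbox2distance (hedged; no max_dis clamping is applied):
>>> import torch
>>> from mmdet.core.bbox import bbox2distance, distance2bbox
>>> points = torch.Tensor([[5., 5.], [12., 12.]])
>>> bboxes = torch.Tensor([[0., 0., 10., 10.], [8., 8., 16., 16.]])
>>> distances = bbox2distance(points, bboxes)
>>> assert torch.allclose(distance2bbox(points, distances), bboxes, atol=1e-4)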
export¶
mask¶
- class mmdet.core.mask.BaseInstanceMasks[source]¶
Base class for instance masks.
- abstract property areas¶
areas of each instance.
- Type
ndarray
- abstract crop(bbox)[source]¶
Crop each mask by the given bbox.
- Parameters
bbox (ndarray) – Bbox in format [x1, y1, x2, y2], shape (4, ).
- Returns
The cropped masks.
- Return type
- abstract crop_and_resize(bboxes, out_shape, inds, device, interpolation='bilinear', binarize=True)[source]¶
Crop and resize masks by the given bboxes.
This function is mainly used in mask targets computation. It first aligns masks to bboxes by assigned_inds, then crops the masks by the assigned bboxes and resizes them to the size of (mask_h, mask_w).
- Parameters
bboxes (Tensor) – Bboxes in format [x1, y1, x2, y2], shape (N, 4)
out_shape (tuple[int]) – Target (h, w) of resized mask
inds (ndarray) – Indexes to assign masks to each bbox, shape (N,) and values should be between [0, num_masks - 1].
device (str) – Device of bboxes
interpolation (str) – See mmcv.imresize
binarize (bool) – If True, fractional values are rounded to 0 or 1 after the resize operation. If False and unsupported, an error will be raised. Defaults to True.
- Returns
the cropped and resized masks.
- Return type
- abstract flip(flip_direction='horizontal')[source]¶
Flip masks along the given direction.
- Parameters
flip_direction (str) – Either ‘horizontal’ or ‘vertical’.
- Returns
The flipped masks.
- Return type
- abstract pad(out_shape, pad_val)[source]¶
Pad masks to the given size of (h, w).
- Parameters
out_shape (tuple[int]) – Target (h, w) of padded mask.
pad_val (int) – The padded value.
- Returns
The padded masks.
- Return type
- abstract rescale(scale, interpolation='nearest')[source]¶
Rescale masks as large as possible while keeping the aspect ratio. For details, refer to mmcv.imrescale.
- Parameters
scale (tuple[int]) – The maximum size (h, w) of rescaled mask.
interpolation (str) – Same as mmcv.imrescale().
- Returns
The rescaled masks.
- Return type
- abstract resize(out_shape, interpolation='nearest')[source]¶
Resize masks to the given out_shape.
- Parameters
out_shape – Target (h, w) of resized mask.
interpolation (str) – See mmcv.imresize().
- Returns
The resized masks.
- Return type
- abstract rotate(out_shape, angle, center=None, scale=1.0, fill_val=0)[source]¶
Rotate the masks.
- Parameters
out_shape (tuple[int]) – Shape for output mask, format (h, w).
angle (int | float) – Rotation angle in degrees. Positive values mean counter-clockwise rotation.
center (tuple[float], optional) – Center point (w, h) of the rotation in source image. If not specified, the center of the image will be used.
scale (int | float) – Isotropic scale factor.
fill_val (int | float) – Border value. Default 0 for masks.
- Returns
Rotated masks.
- shear(out_shape, magnitude, direction='horizontal', border_value=0, interpolation='bilinear')[source]¶
Shear the masks.
- Parameters
out_shape (tuple[int]) – Shape for output mask, format (h, w).
magnitude (int | float) – The magnitude used for shear.
direction (str) – The shear direction, either “horizontal” or “vertical”.
border_value (int | tuple[int]) – Value used in case of a constant border. Default 0.
interpolation (str) – Same as in mmcv.imshear().
- Returns
Sheared masks.
- Return type
ndarray
- abstract to_ndarray()[source]¶
Convert masks to the format of ndarray.
- Returns
Converted masks in the format of ndarray.
- Return type
ndarray
- abstract to_tensor(dtype, device)[source]¶
Convert masks to the format of Tensor.
- Parameters
dtype (str) – Dtype of converted mask.
device (torch.device) – Device of converted masks.
- Returns
Converted masks in the format of Tensor.
- Return type
Tensor
- abstract translate(out_shape, offset, direction='horizontal', fill_val=0, interpolation='bilinear')[source]¶
Translate the masks.
- Parameters
out_shape (tuple[int]) – Shape for output mask, format (h, w).
offset (int | float) – The offset for translate.
direction (str) – The translate direction, either “horizontal” or “vertical”.
fill_val (int | float) – Border value. Default 0.
interpolation (str) – Same as mmcv.imtranslate().
- Returns
Translated masks.
- class mmdet.core.mask.BitmapMasks(masks, height, width)[source]¶
This class represents masks in the form of bitmaps.
- Parameters
masks (ndarray) – ndarray of masks in shape (N, H, W), where N is the number of objects.
height (int) – height of masks
width (int) – width of masks
Example
>>> from mmdet.core.mask.structures import *  # NOQA
>>> num_masks, H, W = 3, 32, 32
>>> rng = np.random.RandomState(0)
>>> masks = (rng.rand(num_masks, H, W) > 0.1).astype(np.int)
>>> self = BitmapMasks(masks, height=H, width=W)
>>> # demo crop_and_resize
>>> num_boxes = 5
>>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes)
>>> out_shape = (14, 14)
>>> inds = torch.randint(0, len(self), size=(num_boxes,))
>>> device = 'cpu'
>>> interpolation = 'bilinear'
>>> new = self.crop_and_resize(
...     bboxes, out_shape, inds, device, interpolation)
>>> assert len(new) == num_boxes
>>> assert new.height, new.width == out_shape
- property areas¶
- crop_and_resize(bboxes, out_shape, inds, device='cpu', interpolation='bilinear', binarize=True)[source]¶
- classmethod random(num_masks=3, height=32, width=32, dtype=<class 'numpy.uint8'>, rng=None)[source]¶
Generate random bitmap masks for demo / testing purposes.
Example
>>> from mmdet.core.mask.structures import BitmapMasks
>>> self = BitmapMasks.random()
>>> print('self = {}'.format(self))
self = BitmapMasks(num_masks=3, height=32, width=32)
- rotate(out_shape, angle, center=None, scale=1.0, fill_val=0)[source]¶
Rotate the BitmapMasks.
- Parameters
out_shape (tuple[int]) – Shape for output mask, format (h, w).
angle (int | float) – Rotation angle in degrees. Positive values mean counter-clockwise rotation.
center (tuple[float], optional) – Center point (w, h) of the rotation in source image. If not specified, the center of the image will be used.
scale (int | float) – Isotropic scale factor.
fill_val (int | float) – Border value. Default 0 for masks.
- Returns
Rotated BitmapMasks.
- Return type
BitmapMasks
- shear(out_shape, magnitude, direction='horizontal', border_value=0, interpolation='bilinear')[source]¶
Shear the BitmapMasks.
- Parameters
out_shape (tuple[int]) – Shape for output mask, format (h, w).
magnitude (int | float) – The magnitude used for shear.
direction (str) – The shear direction, either “horizontal” or “vertical”.
border_value (int | tuple[int]) – Value used in case of a constant border.
interpolation (str) – Same as in mmcv.imshear().
- Returns
The sheared masks.
- Return type
BitmapMasks
- translate(out_shape, offset, direction='horizontal', fill_val=0, interpolation='bilinear')[source]¶
Translate the BitmapMasks.
- Parameters
out_shape (tuple[int]) – Shape for output mask, format (h, w).
offset (int | float) – The offset for translate.
direction (str) – The translate direction, either “horizontal” or “vertical”.
fill_val (int | float) – Border value. Default 0 for masks.
interpolation (str) – Same as mmcv.imtranslate().
- Returns
Translated BitmapMasks.
- Return type
BitmapMasks
Example
>>> from mmdet.core.mask.structures import BitmapMasks
>>> self = BitmapMasks.random(dtype=np.uint8)
>>> out_shape = (32, 32)
>>> offset = 4
>>> direction = 'horizontal'
>>> fill_val = 0
>>> interpolation = 'bilinear'
>>> # Note, There seem to be issues when:
>>> # * out_shape is different than self's shape
>>> # * the mask dtype is not supported by cv2.AffineWarp
>>> new = self.translate(out_shape, offset, direction, fill_val,
>>>                      interpolation)
>>> assert len(new) == len(self)
>>> assert new.height, new.width == out_shape
- class mmdet.core.mask.PolygonMasks(masks, height, width)[source]¶
This class represents masks in the form of polygons.
Polygons is a list of three levels. The first level of the list corresponds to objects, the second level to the polys that compose the object, the third level to the poly coordinates
- Parameters
masks (list[list[ndarray]]) – The first level of the list corresponds to objects, the second level to the polys that compose the object, the third level to the poly coordinates
height (int) – height of masks
width (int) – width of masks
Example
>>> from mmdet.core.mask.structures import *  # NOQA
>>> masks = [
>>>     [ np.array([0, 0, 10, 0, 10, 10., 0, 10, 0, 0]) ]
>>> ]
>>> height, width = 16, 16
>>> self = PolygonMasks(masks, height, width)
>>> # demo translate
>>> new = self.translate((16, 16), 4., direction='horizontal')
>>> assert np.all(new.masks[0][0][1::2] == masks[0][0][1::2])
>>> assert np.all(new.masks[0][0][0::2] == masks[0][0][0::2] + 4)
>>> # demo crop_and_resize
>>> num_boxes = 3
>>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes)
>>> out_shape = (16, 16)
>>> inds = torch.randint(0, len(self), size=(num_boxes,))
>>> device = 'cpu'
>>> interpolation = 'bilinear'
>>> new = self.crop_and_resize(
...     bboxes, out_shape, inds, device, interpolation)
>>> assert len(new) == num_boxes
>>> assert new.height, new.width == out_shape
- property areas¶
Compute areas of masks.
This function is modified from detectron2. It only works with polygons, computing the areas via the shoelace formula.
- Returns
areas of each instance
- Return type
ndarray
- crop_and_resize(bboxes, out_shape, inds, device='cpu', interpolation='bilinear', binarize=True)[source]¶
- classmethod random(num_masks=3, height=32, width=32, n_verts=5, dtype=<class 'numpy.float32'>, rng=None)[source]¶
Generate random polygon masks for demo / testing purposes.
Adapted from [1].
References
- [1]
https://gitlab.kitware.com/computer-vision/kwimage/-/blob/928cae35ca8/kwimage/structs/polygon.py#L379
Example
>>> from mmdet.core.mask.structures import PolygonMasks
>>> self = PolygonMasks.random()
>>> print('self = {}'.format(self))
- shear(out_shape, magnitude, direction='horizontal', border_value=0, interpolation='bilinear')[source]¶
- translate(out_shape, offset, direction='horizontal', fill_val=None, interpolation=None)[source]¶
Translate the PolygonMasks.
Example
>>> self = PolygonMasks.random(dtype=np.int)
>>> out_shape = (self.height, self.width)
>>> new = self.translate(out_shape, 4., direction='horizontal')
>>> assert np.all(new.masks[0][0][1::2] == self.masks[0][0][1::2])
>>> assert np.all(new.masks[0][0][0::2] == self.masks[0][0][0::2] + 4)
- mmdet.core.mask.encode_mask_results(mask_results)[source]¶
Encode bitmap mask to RLE code.
- Parameters
mask_results (list | tuple[list]) – bitmap mask results. In mask scoring rcnn, mask_results is a tuple of (segm_results, segm_cls_score).
- Returns
RLE encoded mask.
- Return type
list | tuple
- mmdet.core.mask.mask2bbox(masks)[source]¶
Obtain tight bounding boxes of binary masks.
- Parameters
masks (Tensor) – Binary mask of shape (n, h, w).
- Returns
Bboxes with shape (n, 4) of the positive region in each binary mask.
- Return type
Tensor
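For intuition, the tight-box computation can be sketched directly with torch ops. This is an illustrative re-implementation using the documented (n, h, w) -> (n, 4) shapes, not the library code (here x2/y2 are taken as inclusive pixel indices; the library may use a different convention):
>>> import torch
>>> def tight_bboxes(masks):
...     # masks: (n, h, w) binary tensor -> (n, 4) boxes in (x1, y1, x2, y2)
...     boxes = torch.zeros((masks.shape[0], 4))
...     for i, m in enumerate(masks):
...         ys, xs = torch.nonzero(m, as_tuple=True)
...         if len(xs) > 0:
...             boxes[i] = torch.stack([xs.min(), ys.min(), xs.max(), ys.max()]).float()
...     return boxes
>>> m = torch.zeros((1, 8, 8), dtype=torch.bool)
>>> m[0, 2:5, 3:7] = True
>>> tight_bboxes(m)
tensor([[3., 2., 6., 4.]])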
- mmdet.core.mask.mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list, cfg)[source]¶
Compute mask target for positive proposals in multiple images.
- Parameters
pos_proposals_list (list[Tensor]) – Positive proposals in multiple images.
pos_assigned_gt_inds_list (list[Tensor]) – Assigned GT indices for each positive proposals.
gt_masks_list (list[BaseInstanceMasks]) – Ground truth masks of each image.
cfg (dict) – Config dict that specifies the mask size.
- Returns
Mask target of each image.
- Return type
list[Tensor]
Example
>>> import mmcv
>>> import mmdet
>>> import numpy as np
>>> import torch
>>> from mmdet.core.mask import BitmapMasks
>>> from mmdet.core.mask.mask_target import *
>>> H, W = 17, 18
>>> cfg = mmcv.Config({'mask_size': (13, 14)})
>>> rng = np.random.RandomState(0)
>>> # Positive proposals (tl_x, tl_y, br_x, br_y) for each image
>>> pos_proposals_list = [
>>>     torch.Tensor([
>>>         [ 7.2425,  5.5929, 13.9414, 14.9541],
>>>         [ 7.3241,  3.6170, 16.3850, 15.3102],
>>>     ]),
>>>     torch.Tensor([
>>>         [ 4.8448, 6.4010,  7.0314,  9.7681],
>>>         [ 5.9790, 2.6989,  7.4416,  4.8580],
>>>         [ 0.0000, 0.0000,  0.1398,  9.8232],
>>>     ]),
>>> ]
>>> # Corresponding class index for each proposal for each image
>>> pos_assigned_gt_inds_list = [
>>>     torch.LongTensor([7, 0]),
>>>     torch.LongTensor([5, 4, 1]),
>>> ]
>>> # Ground truth mask for each true object for each image
>>> gt_masks_list = [
>>>     BitmapMasks(rng.rand(8, H, W), height=H, width=W),
>>>     BitmapMasks(rng.rand(6, H, W), height=H, width=W),
>>> ]
>>> mask_targets = mask_target(
>>>     pos_proposals_list, pos_assigned_gt_inds_list,
>>>     gt_masks_list, cfg)
>>> assert mask_targets.shape == (5,) + cfg['mask_size']
- mmdet.core.mask.split_combined_polys(polys, poly_lens, polys_per_mask)[source]¶
Split the combined 1-D polys into masks.
A mask is represented as a list of polys, and a poly is represented as a 1-D array. In dataset, all masks are concatenated into a single 1-D tensor. Here we need to split the tensor into original representations.
- Parameters
polys (list) – a list (length = image num) of 1-D tensors
poly_lens (list) – a list (length = image num) of poly length
polys_per_mask (list) – a list (length = image num) of poly number of each mask
- Returns
a list (length = image num) of list (length = mask num) of list (length = poly num) of numpy array.
- Return type
list
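The splitting logic amounts to slicing the flat buffer by the recorded lengths; here is a minimal illustration on toy data (a sketch of the idea, not the library code):
>>> import numpy as np
>>> # one image: a flat buffer holding two polygons of 6 and 8 values
>>> polys_flat = np.arange(14, dtype=float)
>>> poly_lens = [6, 8]        # number of values in each poly
>>> polys_per_mask = [1, 1]   # two masks, one poly each
>>> polys = np.split(polys_flat, np.cumsum(poly_lens)[:-1])
>>> masks, idx = [], 0
>>> for n in polys_per_mask:
...     masks.append(polys[idx:idx + n])
...     idx += n
>>> print(len(masks), len(masks[0]), masks[0][0].shape)
2 1 (6,)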
evaluation¶
- mmdet.core.evaluation.average_precision(recalls, precisions, mode='area')[source]¶
Calculate average precision (for single or multiple scales).
- Parameters
recalls (ndarray) – shape (num_scales, num_dets) or (num_dets, )
precisions (ndarray) – shape (num_scales, num_dets) or (num_dets, )
mode (str) – ‘area’ or ‘11points’, ‘area’ means calculating the area under precision-recall curve, ‘11points’ means calculating the average precision of recalls at [0, 0.1, …, 1]
- Returns
calculated average precision
- Return type
float or ndarray
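As a sanity check on the ‘area’ mode, the computation can be sketched as the area under the running-maximum precision envelope. This is a simplified re-derivation for the single-scale (num_dets, ) case, not the library implementation:
>>> import numpy as np
>>> def ap_area(recalls, precisions):
...     mrec = np.concatenate(([0.], recalls, [1.]))
...     mpre = np.concatenate(([0.], precisions, [0.]))
...     # enforce a monotonically non-increasing precision envelope
...     for i in range(len(mpre) - 2, -1, -1):
...         mpre[i] = max(mpre[i], mpre[i + 1])
...     idx = np.where(mrec[1:] != mrec[:-1])[0]
...     return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
>>> ap = ap_area(np.array([0.25, 0.5, 0.75, 1.0]),
...              np.array([1.0, 0.8, 0.6, 0.5]))  # ~0.725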
- mmdet.core.evaluation.eval_map(det_results, annotations, scale_ranges=None, iou_thr=0.5, ioa_thr=None, dataset=None, logger=None, tpfp_fn=None, nproc=4, use_legacy_coordinate=False, use_group_of=False)[source]¶
Evaluate mAP of a dataset.
- Parameters
det_results (list[list]) – [[cls1_det, cls2_det, …], …]. The outer list indicates images, and the inner list indicates per-class detected bboxes.
annotations (list[dict]) –
Ground truth annotations where each item of the list indicates an image. Keys of annotations are:
bboxes: numpy array of shape (n, 4)
labels: numpy array of shape (n, )
bboxes_ignore (optional): numpy array of shape (k, 4)
labels_ignore (optional): numpy array of shape (k, )
scale_ranges (list[tuple] | None) – Range of scales to be evaluated, in the format [(min1, max1), (min2, max2), …]. A range of (32, 64) means the area range between (32**2, 64**2). Default: None.
iou_thr (float) – IoU threshold to be considered as matched. Default: 0.5.
ioa_thr (float | None) – IoA threshold to be considered as matched, which only used in OpenImages evaluation. Default: None.
dataset (list[str] | str | None) – Dataset name or dataset classes, there are minor differences in metrics for different datasets, e.g. “voc07”, “imagenet_det”, etc. Default: None.
logger (logging.Logger | str | None) – The way to print the mAP summary. See mmcv.utils.print_log() for details. Default: None.
tpfp_fn (callable | None) – The function used to determine true/false positives. If None, tpfp_default() is used as default unless dataset is ‘det’ or ‘vid’ (in which case tpfp_imagenet() is used). If given as a function, it is used to evaluate tp & fp. Default: None.
nproc (int) – Processes used for computing TP and FP. Default: 4.
use_legacy_coordinate (bool) – Whether to use the coordinate system of mmdet v1.x, in which width and height are calculated as ‘x2 - x1 + 1’ and ‘y2 - y1 + 1’ respectively. Default: False.
use_group_of (bool) – Whether to use group-of boxes when calculating TP and FP, which is only used in OpenImages evaluation. Default: False.
- Returns
(mAP, [dict, dict, …])
- Return type
tuple
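A minimal usage sketch following the documented det_results / annotations layout (toy single-image, single-class data; assumes mmdet is installed; logger='silent' suppresses the per-class table):
>>> import numpy as np
>>> from mmdet.core.evaluation import eval_map
>>> # one image, one class; each det array is (k, 5): x1, y1, x2, y2, score
>>> det_results = [[np.array([[10., 10., 50., 50., 0.9],
...                           [60., 60., 90., 90., 0.3]])]]
>>> annotations = [dict(bboxes=np.array([[12., 12., 48., 48.]]),
...                     labels=np.array([0]))]
>>> mean_ap, per_class = eval_map(det_results, annotations,
...                               iou_thr=0.5, logger='silent')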
- mmdet.core.evaluation.eval_recalls(gts, proposals, proposal_nums=None, iou_thrs=0.5, logger=None, use_legacy_coordinate=False)[source]¶
Calculate recalls.
- Parameters
gts (list[ndarray]) – a list of arrays of shape (n, 4)
proposals (list[ndarray]) – a list of arrays of shape (k, 4) or (k, 5)
proposal_nums (int | Sequence[int]) – Top N proposals to be evaluated.
iou_thrs (float | Sequence[float]) – IoU thresholds. Default: 0.5.
logger (logging.Logger | str | None) – The way to print the recall summary. See mmcv.utils.print_log() for details. Default: None.
use_legacy_coordinate (bool) – Whether to use the coordinate system of mmdet v1.x, in which 1 is added to both width and height, i.e. w = ‘x2 - x1 + 1’ and h = ‘y2 - y1 + 1’. Default: False.
- Returns
recalls of different ious and proposal nums
- Return type
ndarray
- mmdet.core.evaluation.plot_iou_recall(recalls, iou_thrs)[source]¶
Plot IoU-Recalls curve.
- Parameters
recalls (ndarray or list) – shape (k,)
iou_thrs (ndarray or list) – same shape as recalls
- mmdet.core.evaluation.plot_num_recall(recalls, proposal_nums)[source]¶
Plot Proposal_num-Recalls curve.
- Parameters
recalls (ndarray or list) – shape (k,)
proposal_nums (ndarray or list) – same shape as recalls
- mmdet.core.evaluation.print_map_summary(mean_ap, results, dataset=None, scale_ranges=None, logger=None)[source]¶
Print mAP and results of each class.
A table will be printed to show the gts/dets/recall/AP of each class and the mAP.
- Parameters
mean_ap (float) – Calculated from eval_map().
results (list[dict]) – Calculated from eval_map().
dataset (list[str] | str | None) – Dataset name or dataset classes.
scale_ranges (list[tuple] | None) – Range of scales to be evaluated.
logger (logging.Logger | str | None) – The way to print the mAP summary. See mmcv.utils.print_log() for details. Default: None.
- mmdet.core.evaluation.print_recall_summary(recalls, proposal_nums, iou_thrs, row_idxs=None, col_idxs=None, logger=None)[source]¶
Print recalls in a table.
- Parameters
recalls (ndarray) – calculated from bbox_recalls
proposal_nums (ndarray or list) – top N proposals
iou_thrs (ndarray or list) – iou thresholds
row_idxs (ndarray) – which rows (proposal nums) to print
col_idxs (ndarray) – which cols (iou thresholds) to print
logger (logging.Logger | str | None) – The way to print the recall summary. See mmcv.utils.print_log() for details. Default: None.
post_processing¶
- mmdet.core.post_processing.fast_nms(multi_bboxes, multi_scores, multi_coeffs, score_thr, iou_thr, top_k, max_num=- 1)[source]¶
Fast NMS in YOLACT.
Fast NMS allows already-removed detections to suppress other detections so that every instance can be decided to be kept or discarded in parallel, which is not possible in traditional NMS. This relaxation allows us to implement Fast NMS entirely in standard GPU-accelerated matrix operations.
- Parameters
multi_bboxes (Tensor) – shape (n, #class*4) or (n, 4)
multi_scores (Tensor) – shape (n, #class+1), where the last column contains scores of the background class, but this will be ignored.
multi_coeffs (Tensor) – shape (n, #class*coeffs_dim).
score_thr (float) – bbox threshold, bboxes with scores lower than it will not be considered.
iou_thr (float) – IoU threshold to be considered as conflicted.
top_k (int) – if there are more than top_k bboxes before NMS, only top top_k will be kept.
max_num (int) – if there are more than max_num bboxes after NMS, only top max_num will be kept. If -1, keep all the bboxes. Default: -1.
- Returns
(dets, labels, coefficients), tensors of shape (k, 5), (k, 1), and (k, coeffs_dim). Dets are boxes with scores. Labels are 0-based.
- Return type
tuple
- mmdet.core.post_processing.mask_matrix_nms(masks, labels, scores, filter_thr=- 1, nms_pre=- 1, max_num=- 1, kernel='gaussian', sigma=2.0, mask_area=None)[source]¶
Matrix NMS for multi-class masks.
- Parameters
masks (Tensor) – Has shape (num_instances, h, w)
labels (Tensor) – Labels of corresponding masks, has shape (num_instances,).
scores (Tensor) – Mask scores of corresponding masks, has shape (num_instances).
filter_thr (float) – Score threshold to filter the masks after matrix nms. Default: -1, which means do not use filter_thr.
nms_pre (int) – The max number of instances to do the matrix nms. Default: -1, which means do not use nms_pre.
max_num (int, optional) – If there are more than max_num masks after matrix, only top max_num will be kept. Default: -1, which means do not use max_num.
kernel (str) – ‘linear’ or ‘gaussian’.
sigma (float) – std in gaussian method.
mask_area (Tensor) – The sum of seg_masks.
- Returns
Processed mask results.
scores (Tensor): Updated scores, has shape (n,).
labels (Tensor): Retained labels, has shape (n,).
masks (Tensor): Retained masks, has shape (n, w, h).
keep_inds (Tensor): Indices of the retained masks in the input masks, has shape (n,).
- Return type
tuple(Tensor)
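A small usage sketch with toy masks, following the documented shapes (assumes mmdet is installed; the masks are passed here as 0/1 float tensors):
>>> import torch
>>> from mmdet.core.post_processing import mask_matrix_nms
>>> masks = torch.zeros((3, 16, 16))
>>> masks[0, 2:10, 2:10] = 1.   # two heavily overlapping masks of class 0
>>> masks[1, 3:10, 3:10] = 1.
>>> masks[2, 12:16, 12:16] = 1. # one separate mask of class 1
>>> labels = torch.tensor([0, 0, 1])
>>> scores = torch.tensor([0.9, 0.8, 0.7])
>>> new_scores, new_labels, new_masks, keep_inds = mask_matrix_nms(
...     masks, labels, scores, kernel='gaussian', sigma=2.0)
>>> assert len(keep_inds) == 3  # nothing is dropped with the default filter_thr=-1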
- mmdet.core.post_processing.merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg)[source]¶
Merge augmented detection bboxes and scores.
- Parameters
aug_bboxes (list[Tensor]) – shape (n, 4*#class)
aug_scores (list[Tensor] or None) – shape (n, #class)
img_shapes (list[Tensor]) – shape (3, ).
rcnn_test_cfg (dict) – rcnn test config.
- Returns
(bboxes, scores)
- Return type
tuple
- mmdet.core.post_processing.merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None)[source]¶
Merge augmented mask prediction.
- Parameters
aug_masks (list[ndarray]) – shape (n, #class, h, w)
img_shapes (list[ndarray]) – shape (3, ).
rcnn_test_cfg (dict) – rcnn test config.
- Returns
Merged masks.
- Return type
ndarray
- mmdet.core.post_processing.merge_aug_proposals(aug_proposals, img_metas, cfg)[source]¶
Merge augmented proposals (multiscale, flip, etc.)
- Parameters
aug_proposals (list[Tensor]) – proposals from different testing schemes, shape (n, 5). Note that they are not rescaled to the original image size.
img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
cfg (dict) – rpn test config.
- Returns
shape (n, 4), proposals corresponding to original image scale.
- Return type
Tensor
- mmdet.core.post_processing.multiclass_nms(multi_bboxes, multi_scores, score_thr, nms_cfg, max_num=- 1, score_factors=None, return_inds=False)[source]¶
NMS for multi-class bboxes.
- Parameters
multi_bboxes (Tensor) – shape (n, #class*4) or (n, 4)
multi_scores (Tensor) – shape (n, #class+1), where the last column contains scores of the background class, but this will be ignored.
score_thr (float) – bbox threshold, bboxes with scores lower than it will not be considered.
nms_cfg (dict) – a dict that contains the arguments of nms operations
max_num (int, optional) – if there are more than max_num bboxes after NMS, only top max_num will be kept. Default to -1.
score_factors (Tensor, optional) – The factors multiplied to scores before applying NMS. Default to None.
return_inds (bool, optional) – Whether return the indices of kept bboxes. Default to False.
- Returns
(dets, labels, indices (optional)), tensors of shape (k, 5), (k), and (k). Dets are boxes with scores. Labels are 0-based.
- Return type
tuple
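A minimal usage sketch following the documented layout (toy boxes; the last column of multi_scores is the background class and is ignored; assumes a recent mmdet/mmcv where nms_cfg uses iou_threshold):
>>> import torch
>>> from mmdet.core.post_processing import multiclass_nms
>>> multi_bboxes = torch.tensor([[10., 10., 50., 50.],
...                              [11., 11., 51., 51.],
...                              [60., 60., 90., 90.]])
>>> multi_scores = torch.tensor([[0.9, 0.05, 0.05],
...                              [0.8, 0.10, 0.10],
...                              [0.1, 0.85, 0.05]])
>>> dets, labels = multiclass_nms(
...     multi_bboxes, multi_scores, score_thr=0.3,
...     nms_cfg=dict(type='nms', iou_threshold=0.5))
>>> dets.shape[1]  # each det is (x1, y1, x2, y2, score)
5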
utils¶
mmdet.models¶
detectors¶
backbones¶
- class mmdet.models.backbones.CSPDarknet(arch='P5', deepen_factor=1.0, widen_factor=1.0, out_indices=(2, 3, 4), frozen_stages=- 1, use_depthwise=False, arch_ovewrite=None, spp_kernal_sizes=(5, 9, 13), conv_cfg=None, norm_cfg={'eps': 0.001, 'momentum': 0.03, 'type': 'BN'}, act_cfg={'type': 'Swish'}, norm_eval=False, init_cfg={'a': 2.23606797749979, 'distribution': 'uniform', 'layer': 'Conv2d', 'mode': 'fan_in', 'nonlinearity': 'leaky_relu', 'type': 'Kaiming'})[source]¶
CSP-Darknet backbone used in YOLOv5 and YOLOX.
- Parameters
arch (str) – Architecture of CSP-Darknet, from {P5, P6}. Default: P5.
deepen_factor (float) – Depth multiplier, multiply number of blocks in CSP layer by this amount. Default: 1.0.
widen_factor (float) – Width multiplier, multiply number of channels in each layer by this amount. Default: 1.0.
out_indices (Sequence[int]) – Output from which stages. Default: (2, 3, 4).
frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.
use_depthwise (bool) – Whether to use depthwise separable convolution. Default: False.
arch_ovewrite (list) – Overwrite default arch settings. Default: None.
spp_kernal_sizes (tuple[int]) – Sequence of kernel sizes of the SPP layers. Default: (5, 9, 13).
conv_cfg (dict) – Config dict for convolution layer. Default: None.
norm_cfg (dict) – Dictionary to construct and config norm layer. Default: dict(type=’BN’, momentum=0.03, eps=0.001).
act_cfg (dict) – Config dict for activation layer. Default: dict(type=’Swish’).
norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None.
Example
>>> from mmdet.models import CSPDarknet
>>> import torch
>>> self = CSPDarknet(arch='P5')
>>> self.eval()
>>> inputs = torch.rand(1, 3, 416, 416)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
...
(1, 256, 52, 52)
(1, 512, 26, 26)
(1, 1024, 13, 13)
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- train(mode=True)[source]¶
Sets the module in training mode.
This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Parameters
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns
self
- Return type
Module
- class mmdet.models.backbones.Darknet(depth=53, out_indices=(3, 4, 5), frozen_stages=- 1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, act_cfg={'negative_slope': 0.1, 'type': 'LeakyReLU'}, norm_eval=True, pretrained=None, init_cfg=None)[source]¶
Darknet backbone.
- Parameters
depth (int) – Depth of Darknet. Currently only depth 53 is supported.
out_indices (Sequence[int]) – Output from which stages.
frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.
conv_cfg (dict) – Config dict for convolution layer. Default: None.
norm_cfg (dict) – Dictionary to construct and config norm layer. Default: dict(type=’BN’, requires_grad=True)
act_cfg (dict) – Config dict for activation layer. Default: dict(type=’LeakyReLU’, negative_slope=0.1).
norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
pretrained (str, optional) – model pretrained path. Default: None
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
Example
>>> from mmdet.models import Darknet
>>> import torch
>>> self = Darknet(depth=53)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 416, 416)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
...
(1, 256, 52, 52)
(1, 512, 26, 26)
(1, 1024, 13, 13)
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- static make_conv_res_block(in_channels, out_channels, res_repeat, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, act_cfg={'negative_slope': 0.1, 'type': 'LeakyReLU'})[source]¶
In the Darknet backbone, a ConvLayer is usually followed by a ResBlock; this function builds that pattern. The Conv layer always has 3x3 filters with stride=2, and its number of filters is the same as the output channels of the ResBlock.
- Parameters
in_channels (int) – The number of input channels.
out_channels (int) – The number of output channels.
res_repeat (int) – The number of ResBlocks.
conv_cfg (dict) – Config dict for convolution layer. Default: None.
norm_cfg (dict) – Dictionary to construct and config norm layer. Default: dict(type=’BN’, requires_grad=True)
act_cfg (dict) – Config dict for activation layer. Default: dict(type=’LeakyReLU’, negative_slope=0.1).
- train(mode=True)[source]¶
Sets the module in training mode.
This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Parameters
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns
self
- Return type
Module
- class mmdet.models.backbones.DetectoRS_ResNeXt(groups=1, base_width=4, **kwargs)[source]¶
ResNeXt backbone for DetectoRS.
- Parameters
groups (int) – The number of groups in ResNeXt.
base_width (int) – The base width of ResNeXt.
- class mmdet.models.backbones.DetectoRS_ResNet(sac=None, stage_with_sac=(False, False, False, False), rfp_inplanes=None, output_img=False, pretrained=None, init_cfg=None, **kwargs)[source]¶
ResNet backbone for DetectoRS.
- Parameters
sac (dict, optional) – Dictionary to construct SAC (Switchable Atrous Convolution). Default: None.
stage_with_sac (list) – Which stage to use sac. Default: (False, False, False, False).
rfp_inplanes (int, optional) – The number of channels from RFP. Default: None. If specified, an additional conv layer will be added for rfp_feat. Otherwise, the structure is the same as the base class.
output_img (bool) – If True, the input image will be inserted into the starting position of output. Default: False.
- class mmdet.models.backbones.EfficientNet(arch='b0', drop_path_rate=0.0, out_indices=(6), frozen_stages=0, conv_cfg={'type': 'Conv2dAdaptivePadding'}, norm_cfg={'eps': 0.001, 'type': 'BN'}, act_cfg={'type': 'Swish'}, norm_eval=False, with_cp=False, init_cfg=[{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Constant', 'layer': ['_BatchNorm', 'GroupNorm'], 'val': 1}])[source]¶
EfficientNet backbone.
- Parameters
arch (str) – Architecture of efficientnet. Defaults to b0.
out_indices (Sequence[int]) – Output from which stages. Defaults to (6, ).
frozen_stages (int) – Stages to be frozen (all param fixed). Defaults to 0, which means not freezing any parameters.
conv_cfg (dict) – Config dict for convolution layer. Defaults to None, which means using conv2d.
norm_cfg (dict) – Config dict for normalization layer. Defaults to dict(type=’BN’).
act_cfg (dict) – Config dict for activation layer. Defaults to dict(type=’Swish’).
norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Defaults to False.
with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Defaults to False.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- train(mode=True)[source]¶
Sets the module in training mode.
This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Parameters
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns
self
- Return type
Module
- class mmdet.models.backbones.HRNet(extra, in_channels=3, conv_cfg=None, norm_cfg={'type': 'BN'}, norm_eval=True, with_cp=False, zero_init_residual=False, multiscale_output=True, pretrained=None, init_cfg=None)[source]¶
HRNet backbone.
High-Resolution Representations for Labeling Pixels and Regions (arXiv).
- Parameters
extra (dict) –
Detailed configuration for each stage of HRNet. There must be 4 stages, the configuration for each stage must have 5 keys:
num_modules (int): The number of HRModule in this stage.
num_branches (int): The number of branches in the HRModule.
block (str): The type of convolution block.
num_blocks (tuple): The number of blocks in each branch. The length must be equal to num_branches.
num_channels (tuple): The number of channels in each branch. The length must be equal to num_branches.
in_channels (int) – Number of input image channels. Default: 3.
conv_cfg (dict) – Dictionary to construct and config conv layer.
norm_cfg (dict) – Dictionary to construct and config norm layer.
norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: True.
with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.
zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: False.
multiscale_output (bool) – Whether to output multi-level features produced by multiple branches. If False, only the first level feature will be output. Default: True.
pretrained (str, optional) – Model pretrained path. Default: None.
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None.
Example
>>> from mmdet.models import HRNet
>>> import torch
>>> extra = dict(
>>>     stage1=dict(
>>>         num_modules=1,
>>>         num_branches=1,
>>>         block='BOTTLENECK',
>>>         num_blocks=(4, ),
>>>         num_channels=(64, )),
>>>     stage2=dict(
>>>         num_modules=1,
>>>         num_branches=2,
>>>         block='BASIC',
>>>         num_blocks=(4, 4),
>>>         num_channels=(32, 64)),
>>>     stage3=dict(
>>>         num_modules=4,
>>>         num_branches=3,
>>>         block='BASIC',
>>>         num_blocks=(4, 4, 4),
>>>         num_channels=(32, 64, 128)),
>>>     stage4=dict(
>>>         num_modules=3,
>>>         num_branches=4,
>>>         block='BASIC',
>>>         num_blocks=(4, 4, 4, 4),
>>>         num_channels=(32, 64, 128, 256)))
>>> self = HRNet(extra, in_channels=1)
>>> self.eval()
>>> inputs = torch.rand(1, 1, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 32, 8, 8)
(1, 64, 4, 4)
(1, 128, 2, 2)
(1, 256, 1, 1)
- property norm1¶
the normalization layer named “norm1”
- Type
nn.Module
- property norm2¶
the normalization layer named “norm2”
- Type
nn.Module
- class mmdet.models.backbones.HourglassNet(downsample_times=5, num_stacks=2, stage_channels=(256, 256, 384, 384, 384, 512), stage_blocks=(2, 2, 2, 2, 2, 4), feat_channel=256, norm_cfg={'requires_grad': True, 'type': 'BN'}, pretrained=None, init_cfg=None)[source]¶
HourglassNet backbone.
Stacked Hourglass Networks for Human Pose Estimation. More details can be found in the paper.
- Parameters
downsample_times (int) – Downsample times in a HourglassModule.
num_stacks (int) – Number of HourglassModule modules stacked, 1 for Hourglass-52, 2 for Hourglass-104.
stage_channels (list[int]) – Feature channel of each sub-module in a HourglassModule.
stage_blocks (list[int]) – Number of sub-modules stacked in a HourglassModule.
feat_channel (int) – Feature channel of conv after a HourglassModule.
norm_cfg (dict) – Dictionary to construct and config norm layer.
pretrained (str, optional) – model pretrained path. Default: None
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
Example
>>> from mmdet.models import HourglassNet
>>> import torch
>>> self = HourglassNet()
>>> self.eval()
>>> inputs = torch.rand(1, 3, 511, 511)
>>> level_outputs = self.forward(inputs)
>>> for level_output in level_outputs:
...     print(tuple(level_output.shape))
(1, 256, 128, 128)
(1, 256, 128, 128)
- class mmdet.models.backbones.MobileNetV2(widen_factor=1.0, out_indices=(1, 2, 4, 7), frozen_stages=- 1, conv_cfg=None, norm_cfg={'type': 'BN'}, act_cfg={'type': 'ReLU6'}, norm_eval=False, with_cp=False, pretrained=None, init_cfg=None)[source]¶
MobileNetV2 backbone.
- Parameters
widen_factor (float) – Width multiplier, multiply number of channels in each layer by this amount. Default: 1.0.
out_indices (Sequence[int], optional) – Output from which stages. Default: (1, 2, 4, 7).
frozen_stages (int) – Stages to be frozen (all param fixed). Default: -1, which means not freezing any parameters.
conv_cfg (dict, optional) – Config dict for convolution layer. Default: None, which means using conv2d.
norm_cfg (dict) – Config dict for normalization layer. Default: dict(type=’BN’).
act_cfg (dict) – Config dict for activation layer. Default: dict(type=’ReLU6’).
norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.
with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.
pretrained (str, optional) – model pretrained path. Default: None
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
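No example block is given above, so here is a small construction sketch using the documented defaults. With widen_factor=1.0 and out_indices=(1, 2, 4, 7) the four outputs have strides 4, 8, 16 and 32; the exact channel counts depend on the configuration, so the shapes are only printed here (assumes mmdet is installed):
>>> from mmdet.models import MobileNetV2
>>> import torch
>>> self = MobileNetV2(widen_factor=1.0, out_indices=(1, 2, 4, 7))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))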
- make_layer(out_channels, num_blocks, stride, expand_ratio)[source]¶
Stack InvertedResidual blocks to build a layer for MobileNetV2.
- Parameters
out_channels (int) – out_channels of block.
num_blocks (int) – number of blocks.
stride (int) – stride of the first block. Default: 1
expand_ratio (int) – Expand the number of channels of the hidden layer in InvertedResidual by this ratio. Default: 6.
- class mmdet.models.backbones.PyramidVisionTransformer(pretrain_img_size=224, in_channels=3, embed_dims=64, num_stages=4, num_layers=[3, 4, 6, 3], num_heads=[1, 2, 5, 8], patch_sizes=[4, 2, 2, 2], strides=[4, 2, 2, 2], paddings=[0, 0, 0, 0], sr_ratios=[8, 4, 2, 1], out_indices=(0, 1, 2, 3), mlp_ratios=[8, 8, 4, 4], qkv_bias=True, drop_rate=0.0, attn_drop_rate=0.0, drop_path_rate=0.1, use_abs_pos_embed=True, norm_after_stage=False, use_conv_ffn=False, act_cfg={'type': 'GELU'}, norm_cfg={'eps': 1e-06, 'type': 'LN'}, pretrained=None, convert_weights=True, init_cfg=None)[source]¶
Pyramid Vision Transformer (PVT)
Implementation of Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions.
- Parameters
pretrain_img_size (int | tuple[int]) – The size of input image when pretrain. Defaults: 224.
in_channels (int) – Number of input channels. Default: 3.
embed_dims (int) – Embedding dimension. Default: 64.
num_stages (int) – The num of stages. Default: 4.
num_layers (Sequence[int]) – The layer number of each transformer encode layer. Default: [3, 4, 6, 3].
num_heads (Sequence[int]) – The attention heads of each transformer encode layer. Default: [1, 2, 5, 8].
patch_sizes (Sequence[int]) – The patch_size of each patch embedding. Default: [4, 2, 2, 2].
strides (Sequence[int]) – The stride of each patch embedding. Default: [4, 2, 2, 2].
paddings (Sequence[int]) – The padding of each patch embedding. Default: [0, 0, 0, 0].
sr_ratios (Sequence[int]) – The spatial reduction rate of each transformer encode layer. Default: [8, 4, 2, 1].
out_indices (Sequence[int] | int) – Output from which stages. Default: (0, 1, 2, 3).
mlp_ratios (Sequence[int]) – The ratio of the mlp hidden dim to the embedding dim of each transformer encode layer. Default: [8, 8, 4, 4].
qkv_bias (bool) – Enable bias for qkv if True. Default: True.
drop_rate (float) – Probability of an element to be zeroed. Default 0.0.
attn_drop_rate (float) – The drop out rate for attention layer. Default 0.0.
drop_path_rate (float) – stochastic depth rate. Default 0.1.
use_abs_pos_embed (bool) – If True, add absolute position embedding to the patch embedding. Defaults: True.
use_conv_ffn (bool) – If True, use Convolutional FFN to replace FFN. Default: False.
act_cfg (dict) – The activation config for FFNs. Default: dict(type=’GELU’).
norm_cfg (dict) – Config dict for normalization layer. Default: dict(type=’LN’).
pretrained (str, optional) – model pretrained path. Default: None.
convert_weights (bool) – The flag indicates whether the pre-trained model is from the original repo. We may need to convert some keys to make it compatible. Default: True.
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class mmdet.models.backbones.PyramidVisionTransformerV2(**kwargs)[source]¶
Implementation of PVTv2: Improved Baselines with Pyramid Vision Transformer.
- class mmdet.models.backbones.RegNet(arch, in_channels=3, stem_channels=32, base_channels=32, strides=(2, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=- 1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, with_cp=False, zero_init_residual=True, pretrained=None, init_cfg=None)[source]¶
RegNet backbone.
More details can be found in the paper.
- Parameters
arch (dict) –
The parameters of RegNets:
w0 (int): initial width
wa (float): slope of width
wm (float): quantization parameter to quantize the width
depth (int): depth of the backbone
group_w (int): width of group
bot_mul (float): bottleneck ratio, i.e. expansion of bottleneck.
strides (Sequence[int]) – Strides of the first block of each stage.
base_channels (int) – Base channels after stem layer.
in_channels (int) – Number of input image channels. Default: 3.
dilations (Sequence[int]) – Dilation of each stage.
out_indices (Sequence[int]) – Output from which stages.
style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters.
norm_cfg (dict) – dictionary to construct and config norm layer.
norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.
pretrained (str, optional) – model pretrained path. Default: None
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
Example
>>> from mmdet.models import RegNet
>>> import torch
>>> self = RegNet(
...     arch=dict(
...         w0=88,
...         wa=26.31,
...         wm=2.25,
...         group_w=48,
...         depth=25,
...         bot_mul=1.0))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 96, 8, 8)
(1, 192, 4, 4)
(1, 432, 2, 2)
(1, 1008, 1, 1)
- adjust_width_group(widths, bottleneck_ratio, groups)[source]¶
Adjusts the compatibility of widths and groups.
- Parameters
widths (list[int]) – Width of each stage.
bottleneck_ratio (float) – Bottleneck ratio.
groups (int) – number of groups in each stage
- Returns
The adjusted widths and groups of each stage.
- Return type
tuple(list)
- generate_regnet(initial_width, width_slope, width_parameter, depth, divisor=8)[source]¶
Generates per block width from RegNet parameters.
- Parameters
initial_width ([int]) – Initial width of the backbone
width_slope ([float]) – Slope of the quantized linear function
width_parameter ([int]) – Parameter used to quantize the width.
depth ([int]) – Depth of the backbone.
divisor (int, optional) – The divisor of channels. Defaults to 8.
- Returns
return a list of widths of each stage and the number of stages
- Return type
list, int
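The width schedule behind generate_regnet can be illustrated with a few lines of NumPy: a linear ramp w0 + wa·j is snapped onto a geometric grid with ratio wm and rounded to a multiple of the divisor. This is a simplified re-derivation of the RegNet design-space rule, not the library code:
>>> import numpy as np
>>> def regnet_widths(w0, wa, wm, depth, divisor=8):
...     ws_cont = w0 + wa * np.arange(depth)              # continuous linear widths
...     ks = np.round(np.log(ws_cont / w0) / np.log(wm))  # geometric quantization exponents
...     ws = w0 * np.power(wm, ks)                        # quantized widths
...     ws = (np.round(ws / divisor) * divisor).astype(int)
...     return ws.tolist(), len(np.unique(ws))
>>> widths, num_stages = regnet_widths(w0=88, wa=26.31, wm=2.25, depth=25)
>>> num_stages
4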
- class mmdet.models.backbones.Res2Net(scales=4, base_width=26, style='pytorch', deep_stem=True, avg_down=True, pretrained=None, init_cfg=None, **kwargs)[source]¶
Res2Net backbone.
- Parameters
scales (int) – Scales used in Res2Net. Default: 4
base_width (int) – Basic width of each scale. Default: 26
depth (int) – Depth of res2net, from {50, 101, 152}.
in_channels (int) – Number of input image channels. Default: 3.
num_stages (int) – Res2net stages. Default: 4.
strides (Sequence[int]) – Strides of the first block of each stage.
dilations (Sequence[int]) – Dilation of each stage.
out_indices (Sequence[int]) – Output from which stages.
style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv
avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottle2neck.
frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.
norm_cfg (dict) – Dictionary to construct and config norm layer.
norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
plugins (list[dict]) –
List of plugins for stages, each dict contains:
cfg (dict, required): Cfg dict to build plugin.
position (str, required): Position inside block to insert plugin, options are ‘after_conv1’, ‘after_conv2’, ‘after_conv3’.
stages (tuple[bool], optional): Stages to apply plugin, length should be same as ‘num_stages’.
with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.
pretrained (str, optional) – model pretrained path. Default: None
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
Example
>>> from mmdet.models import Res2Net
>>> import torch
>>> self = Res2Net(depth=50, scales=4, base_width=26)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 256, 8, 8)
(1, 512, 4, 4)
(1, 1024, 2, 2)
(1, 2048, 1, 1)
- class mmdet.models.backbones.ResNeSt(groups=1, base_width=4, radix=2, reduction_factor=4, avg_down_stride=True, **kwargs)[source]¶
ResNeSt backbone.
- Parameters
groups (int) – Number of groups of Bottleneck. Default: 1
base_width (int) – Base width of Bottleneck. Default: 4
radix (int) – Radix of SplitAttentionConv2d. Default: 2
reduction_factor (int) – Reduction factor of inter_channels in SplitAttentionConv2d. Default: 4.
avg_down_stride (bool) – Whether to use average pool for stride in Bottleneck. Default: True.
kwargs (dict) – Keyword arguments for ResNet.
- class mmdet.models.backbones.ResNeXt(groups=1, base_width=4, **kwargs)[source]¶
ResNeXt backbone.
- Parameters
depth (int) – Depth of resnet, from {18, 34, 50, 101, 152}.
in_channels (int) – Number of input image channels. Default: 3.
num_stages (int) – Resnet stages. Default: 4.
groups (int) – Group of resnext.
base_width (int) – Base width of resnext.
strides (Sequence[int]) – Strides of the first block of each stage.
dilations (Sequence[int]) – Dilation of each stage.
out_indices (Sequence[int]) – Output from which stages.
style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters.
norm_cfg (dict) – dictionary to construct and config norm layer.
norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.
- class mmdet.models.backbones.ResNet(depth, in_channels=3, stem_channels=None, base_channels=64, num_stages=4, strides=(1, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=- 1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, with_cp=False, zero_init_residual=True, pretrained=None, init_cfg=None)[source]¶
ResNet backbone.
- Parameters
depth (int) – Depth of resnet, from {18, 34, 50, 101, 152}.
stem_channels (int | None) – Number of stem channels. If not specified, it will be the same as base_channels. Default: None.
base_channels (int) – Number of base channels of res layer. Default: 64.
in_channels (int) – Number of input image channels. Default: 3.
num_stages (int) – Resnet stages. Default: 4.
strides (Sequence[int]) – Strides of the first block of each stage.
dilations (Sequence[int]) – Dilation of each stage.
out_indices (Sequence[int]) – Output from which stages.
style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv
avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck.
frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.
norm_cfg (dict) – Dictionary to construct and config norm layer.
norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
plugins (list[dict]) –
List of plugins for stages, each dict contains:
cfg (dict, required): Cfg dict to build plugin.
position (str, required): Position inside block to insert plugin, options are ‘after_conv1’, ‘after_conv2’, ‘after_conv3’.
stages (tuple[bool], optional): Stages to apply plugin, length should be same as ‘num_stages’.
with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.
pretrained (str, optional) – model pretrained path. Default: None
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
Example
>>> from mmdet.models import ResNet
>>> import torch
>>> self = ResNet(depth=18)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 64, 8, 8)
(1, 128, 4, 4)
(1, 256, 2, 2)
(1, 512, 1, 1)
- make_stage_plugins(plugins, stage_idx)[source]¶
Make plugins for the ResNet stage_idx-th stage.
Currently we support inserting context_block, empirical_attention_block and nonlocal_block into backbones like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of the Bottleneck.
An example of the plugins format could be:
Examples
>>> plugins=[
...     dict(cfg=dict(type='xxx', arg1='xxx'),
...          stages=(False, True, True, True),
...          position='after_conv2'),
...     dict(cfg=dict(type='yyy'),
...          stages=(True, True, True, True),
...          position='after_conv3'),
...     dict(cfg=dict(type='zzz', postfix='1'),
...          stages=(True, True, True, True),
...          position='after_conv3'),
...     dict(cfg=dict(type='zzz', postfix='2'),
...          stages=(True, True, True, True),
...          position='after_conv3')
... ]
>>> self = ResNet(depth=18)
>>> stage_plugins = self.make_stage_plugins(plugins, 0)
>>> assert len(stage_plugins) == 3
Suppose stage_idx=0, the structure of blocks in the stage would be: conv1 -> conv2 -> conv3 -> yyy -> zzz1 -> zzz2
Suppose stage_idx=1, the structure of blocks in the stage would be: conv1 -> conv2 -> xxx -> conv3 -> yyy -> zzz1 -> zzz2
If stages is missing, the plugin would be applied to all stages.
- Parameters
plugins (list[dict]) – List of plugins cfg to build. The postfix is required if multiple same type plugins are inserted.
stage_idx (int) – Index of stage to build
- Returns
Plugins for current stage
- Return type
list[dict]
- property norm1¶
the normalization layer named “norm1”
- Type
nn.Module
- class mmdet.models.backbones.ResNetV1d(**kwargs)[source]¶
ResNetV1d variant described in Bag of Tricks.
Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in the input stem with three 3x3 convs. And in the downsampling block, a 2x2 avg_pool with stride 2 is added before conv, whose stride is changed to 1.
- class mmdet.models.backbones.SSDVGG(depth, with_last_pool=False, ceil_mode=True, out_indices=(3, 4), out_feature_indices=(22, 34), pretrained=None, init_cfg=None, input_size=None, l2_norm_scale=None)[source]¶
VGG Backbone network for single-shot-detection.
- Parameters
depth (int) – Depth of vgg, from {11, 13, 16, 19}.
with_last_pool (bool) – Whether to add a pooling layer at the last of the model
ceil_mode (bool) – When True, will use ceil instead of floor to compute the output shape.
out_indices (Sequence[int]) – Output from which stages.
out_feature_indices (Sequence[int]) – Output from which feature map.
pretrained (str, optional) – model pretrained path. Default: None
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
input_size (int, optional) – Deprecated argument. Width and height of input, from {300, 512}.
l2_norm_scale (float, optional) – Deprecated argument. L2 normalization layer init scale.
Example
>>> self = SSDVGG(input_size=300, depth=11)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 300, 300)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 1024, 19, 19)
(1, 512, 10, 10)
(1, 256, 5, 5)
(1, 256, 3, 3)
(1, 256, 1, 1)
- class mmdet.models.backbones.SwinTransformer(pretrain_img_size=224, in_channels=3, embed_dims=96, patch_size=4, window_size=7, mlp_ratio=4, depths=(2, 2, 6, 2), num_heads=(3, 6, 12, 24), strides=(4, 2, 2, 2), out_indices=(0, 1, 2, 3), qkv_bias=True, qk_scale=None, patch_norm=True, drop_rate=0.0, attn_drop_rate=0.0, drop_path_rate=0.1, use_abs_pos_embed=False, act_cfg={'type': 'GELU'}, norm_cfg={'type': 'LN'}, with_cp=False, pretrained=None, convert_weights=False, frozen_stages=- 1, init_cfg=None)[source]¶
Swin Transformer. A PyTorch implementation of: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows.
Inspiration from https://github.com/microsoft/Swin-Transformer
- Parameters
pretrain_img_size (int | tuple[int]) – The size of input image when pretrain. Defaults: 224.
in_channels (int) – The num of input channels. Defaults: 3.
embed_dims (int) – The feature dimension. Default: 96.
patch_size (int | tuple[int]) – Patch size. Default: 4.
window_size (int) – Window size. Default: 7.
mlp_ratio (int) – Ratio of mlp hidden dim to embedding dim. Default: 4.
depths (tuple[int]) – Depths of each Swin Transformer stage. Default: (2, 2, 6, 2).
num_heads (tuple[int]) – Parallel attention heads of each Swin Transformer stage. Default: (3, 6, 12, 24).
strides (tuple[int]) – The patch merging or patch embedding stride of each Swin Transformer stage. (In swin, we set kernel size equal to stride.) Default: (4, 2, 2, 2).
out_indices (tuple[int]) – Output from which stages. Default: (0, 1, 2, 3).
qkv_bias (bool, optional) – If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional) – Override default qk scale of head_dim ** -0.5 if set. Default: None.
patch_norm (bool) – Whether to add a norm layer for patch embedding and patch merging. Default: True.
drop_rate (float) – Dropout rate. Defaults: 0.
attn_drop_rate (float) – Attention dropout rate. Default: 0.
drop_path_rate (float) – Stochastic depth rate. Defaults: 0.1.
use_abs_pos_embed (bool) – If True, add absolute position embedding to the patch embedding. Defaults: False.
act_cfg (dict) – Config dict for activation layer. Default: dict(type=’GELU’).
norm_cfg (dict) – Config dict for normalization layer at the output of the backbone. Defaults: dict(type=’LN’).
with_cp (bool, optional) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.
pretrained (str, optional) – model pretrained path. Default: None.
convert_weights (bool) – The flag indicates whether the pre-trained model is from the original repo. We may need to convert some keys to make it compatible. Default: False.
frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). Default: -1 (-1 means not freezing any parameters).
init_cfg (dict, optional) – The Config for initialization. Defaults to None.
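No example block is provided above; the following construction sketch uses the default (Swin-T style) settings, where the channel count doubles at each of the four stages starting from embed_dims=96. The shapes listed are what this configuration is expected to produce for a 224x224 input (assumes mmdet is installed):
>>> from mmdet.models import SwinTransformer
>>> import torch
>>> self = SwinTransformer(embed_dims=96, depths=(2, 2, 6, 2), num_heads=(3, 6, 12, 24))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 96, 56, 56)
(1, 192, 28, 28)
(1, 384, 14, 14)
(1, 768, 7, 7)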
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class mmdet.models.backbones.TridentResNet(depth, num_branch, test_branch_idx, trident_dilations, **kwargs)[source]¶
The stem layer, stage 1 and stage 2 in Trident ResNet are identical to ResNet, while in stage 3, Trident BottleBlock is utilized to replace the normal BottleBlock to yield trident output. Different branch shares the convolution weight but uses different dilations to achieve multi-scale output.
                             / stage3(b0) \
x - stem - stage1 - stage2 -   stage3(b1)  - output
                             \ stage3(b2) /
- Parameters
depth (int) – Depth of resnet, from {50, 101, 152}.
num_branch (int) – Number of branches in TridentNet.
test_branch_idx (int) – In inference, all 3 branches will be used if test_branch_idx==-1, otherwise only branch with index test_branch_idx will be used.
trident_dilations (tuple[int]) – Dilations of different trident branch. len(trident_dilations) should be equal to num_branch.
necks¶
- class mmdet.models.necks.BFP(Balanced Feature Pyramids)[source]¶
BFP takes multi-level features as inputs and gather them into a single one, then refine the gathered feature and scatter the refined results to multi-level features. This module is used in Libra R-CNN (CVPR 2019), see the paper Libra R-CNN: Towards Balanced Learning for Object Detection for details.
- Parameters
in_channels (int) – Number of input channels (feature maps of all levels should have the same channels).
num_levels (int) – Number of input feature levels.
conv_cfg (dict) – The config dict for convolution layers.
norm_cfg (dict) – The config dict for normalization layers.
refine_level (int) – Index of integration and refine level of BSF in multi-level features from bottom to top.
refine_type (str) – Type of the refine op, currently support [None, ‘conv’, ‘non_local’].
init_cfg (dict or list[dict], optional) – Initialization config dict.
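A minimal usage sketch with hypothetical toy shapes; BFP keeps the number of levels and the per-level shapes unchanged, it only rebalances the features (assumes mmdet is installed):
>>> import torch
>>> from mmdet.models.necks import BFP
>>> in_channels, num_levels = 256, 4
>>> self = BFP(in_channels, num_levels, refine_level=2, refine_type=None).eval()
>>> inputs = tuple(torch.rand(1, in_channels, s, s) for s in (64, 32, 16, 8))
>>> outputs = self.forward(inputs)
>>> [tuple(o.shape) for o in outputs]
[(1, 256, 64, 64), (1, 256, 32, 32), (1, 256, 16, 16), (1, 256, 8, 8)]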
- class mmdet.models.necks.CTResNetNeck(in_channel, num_deconv_filters, num_deconv_kernels, use_dcn=True, init_cfg=None)[source]¶
The neck used in CenterNet for object classification and box regression.
- Parameters
in_channel (int) – Number of input channels.
num_deconv_filters (tuple[int]) – Number of filters per stage.
num_deconv_kernels (tuple[int]) – Number of kernels per stage.
use_dcn (bool) – If True, use DCNv2. Default: True.
init_cfg (dict or list[dict], optional) – Initialization config dict.
- forward(inputs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class mmdet.models.necks.ChannelMapper(in_channels, out_channels, kernel_size=3, conv_cfg=None, norm_cfg=None, act_cfg={'type': 'ReLU'}, num_outs=None, init_cfg={'distribution': 'uniform', 'layer': 'Conv2d', 'type': 'Xavier'})[source]¶
Channel Mapper to reduce/increase channels of backbone features.
This is used to reduce/increase channels of backbone features.
- Parameters
in_channels (List[int]) – Number of input channels per scale.
out_channels (int) – Number of output channels (used at each scale).
kernel_size (int, optional) – kernel_size for reducing channels (used at each scale). Default: 3.
conv_cfg (dict, optional) – Config dict for convolution layer. Default: None.
norm_cfg (dict, optional) – Config dict for normalization layer. Default: None.
act_cfg (dict, optional) – Config dict for activation layer in ConvModule. Default: dict(type=’ReLU’).
num_outs (int, optional) – Number of output feature maps. There would be extra_convs when num_outs larger than the length of in_channels.
init_cfg (dict or list[dict], optional) – Initialization config dict.
Example
>>> import torch
>>> in_channels = [2, 3, 5, 7]
>>> scales = [340, 170, 84, 43]
>>> inputs = [torch.rand(1, c, s, s)
...           for c, s in zip(in_channels, scales)]
>>> self = ChannelMapper(in_channels, 11, 3).eval()
>>> outputs = self.forward(inputs)
>>> for i in range(len(outputs)):
...     print(f'outputs[{i}].shape = {outputs[i].shape}')
outputs[0].shape = torch.Size([1, 11, 340, 340])
outputs[1].shape = torch.Size([1, 11, 170, 170])
outputs[2].shape = torch.Size([1, 11, 84, 84])
outputs[3].shape = torch.Size([1, 11, 43, 43])
- class mmdet.models.necks.DilatedEncoder(in_channels, out_channels, block_mid_channels, num_residual_blocks, block_dilations)[source]¶
Dilated Encoder for YOLOF (https://arxiv.org/abs/2103.09460).
This module contains two types of components:
the original FPN lateral convolution layer and fpn convolution layer, which are 1x1 conv + 3x3 conv
the dilated residual block
- Parameters
in_channels (int) – The number of input channels.
out_channels (int) – The number of output channels.
block_mid_channels (int) – The number of middle block output channels
num_residual_blocks (int) – The number of residual blocks.
block_dilations (list) – The list of residual blocks dilation.
- forward(feature)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class mmdet.models.necks.DyHead(in_channels, out_channels, num_blocks=6, zero_init_offset=True, init_cfg=None)[source]¶
DyHead neck consisting of multiple DyHead Blocks.
See Dynamic Head: Unifying Object Detection Heads with Attentions for details.
- Parameters
in_channels (int) – Number of input channels.
out_channels (int) – Number of output channels.
num_blocks (int, optional) – Number of DyHead Blocks. Default: 6.
zero_init_offset (bool, optional) – Whether to use zero init for spatial_conv_offset. Default: True.
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None.
- class mmdet.models.necks.FPG(in_channels, out_channels, num_outs, stack_times, paths, inter_channels=None, same_down_trans=None, same_up_trans={'kernel_size': 3, 'padding': 1, 'stride': 2, 'type': 'conv'}, across_lateral_trans={'kernel_size': 1, 'type': 'conv'}, across_down_trans={'kernel_size': 3, 'type': 'conv'}, across_up_trans=None, across_skip_trans={'type': 'identity'}, output_trans={'kernel_size': 3, 'type': 'last_conv'}, start_level=0, end_level=- 1, add_extra_convs=False, norm_cfg=None, skip_inds=None, init_cfg=[{'type': 'Caffe2Xavier', 'layer': 'Conv2d'}, {'type': 'Constant', 'layer': ['_BatchNorm', '_InstanceNorm', 'GroupNorm', 'LayerNorm'], 'val': 1.0}])[source]¶
FPG.
Implementation of Feature Pyramid Grids (FPG). This implementation only gives the basic structure stated in the paper, but users can implement different types of transitions to fully explore the potential power of the structure of FPG.
- Parameters
in_channels (int) – Number of input channels (feature maps of all levels should have the same channels).
out_channels (int) – Number of output channels (used at each scale)
num_outs (int) – Number of output scales.
stack_times (int) – The number of times the pyramid architecture will be stacked.
paths (list[str]) – Specify the path order of each stack level. Each element in the list should be either ‘bu’ (bottom-up) or ‘td’ (top-down).
inter_channels (int) – Number of inter channels.
same_up_trans (dict) – Transition that goes up at the same stage.
same_down_trans (dict) – Transition that goes down at the same stage.
across_lateral_trans (dict) – Across-pathway same-stage connection.
across_down_trans (dict) – Across-pathway bottom-up connection.
across_up_trans (dict) – Across-pathway top-down connection.
across_skip_trans (dict) – Across-pathway skip connection.
output_trans (dict) – Transition that trans the output of the last stage.
start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
add_extra_convs (bool) – It decides whether to add conv layers on top of the original feature maps. Default to False. If True, its actual mode is specified by extra_convs_on_inputs.
norm_cfg (dict) – Config dict for normalization layer. Default: None.
init_cfg (dict or list[dict], optional) – Initialization config dict.
- forward(inputs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class mmdet.models.necks.FPN(in_channels, out_channels, num_outs, start_level=0, end_level=- 1, add_extra_convs=False, relu_before_extra_convs=False, no_norm_on_lateral=False, conv_cfg=None, norm_cfg=None, act_cfg=None, upsample_cfg={'mode': 'nearest'}, init_cfg={'distribution': 'uniform', 'layer': 'Conv2d', 'type': 'Xavier'})[source]¶
Feature Pyramid Network.
This is an implementation of paper Feature Pyramid Networks for Object Detection.
- Parameters
in_channels (list[int]) – Number of input channels per scale.
out_channels (int) – Number of output channels (used at each scale).
num_outs (int) – Number of output scales.
start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
add_extra_convs (bool | str) –
If bool, it decides whether to add conv layers on top of the original feature maps. Default to False. If True, it is equivalent to add_extra_convs=’on_input’. If str, it specifies the source feature map of the extra convs. Only the following options are allowed
’on_input’: Last feat map of neck inputs (i.e. backbone feature).
’on_lateral’: Last feature map after lateral convs.
’on_output’: The last output feature map after fpn convs.
relu_before_extra_convs (bool) – Whether to apply relu before the extra conv. Default: False.
no_norm_on_lateral (bool) – Whether to apply norm on lateral. Default: False.
conv_cfg (dict) – Config dict for convolution layer. Default: None.
norm_cfg (dict) – Config dict for normalization layer. Default: None.
act_cfg (dict) – Config dict for activation layer in ConvModule. Default: None.
upsample_cfg (dict) – Config dict for interpolate layer. Default: dict(mode=’nearest’).
init_cfg (dict or list[dict], optional) – Initialization config dict.
Example
>>> import torch
>>> in_channels = [2, 3, 5, 7]
>>> scales = [340, 170, 84, 43]
>>> inputs = [torch.rand(1, c, s, s)
...           for c, s in zip(in_channels, scales)]
>>> self = FPN(in_channels, 11, len(in_channels)).eval()
>>> outputs = self.forward(inputs)
>>> for i in range(len(outputs)):
...     print(f'outputs[{i}].shape = {outputs[i].shape}')
outputs[0].shape = torch.Size([1, 11, 340, 340])
outputs[1].shape = torch.Size([1, 11, 170, 170])
outputs[2].shape = torch.Size([1, 11, 84, 84])
outputs[3].shape = torch.Size([1, 11, 43, 43])
- class mmdet.models.necks.FPN_CARAFE(in_channels, out_channels, num_outs, start_level=0, end_level=- 1, norm_cfg=None, act_cfg=None, order=('conv', 'norm', 'act'), upsample_cfg={'encoder_dilation': 1, 'encoder_kernel': 3, 'type': 'carafe', 'up_group': 1, 'up_kernel': 5}, init_cfg=None)[source]¶
FPN_CARAFE is a more flexible implementation of FPN. It allows more choice for upsample methods during the top-down pathway.
It can reproduce the performance of the ICCV 2019 paper CARAFE: Content-Aware ReAssembly of FEatures. Please refer to https://arxiv.org/abs/1905.02188 for more details.
- Parameters
in_channels (list[int]) – Number of channels for each input feature map.
out_channels (int) – Output channels of feature pyramids.
num_outs (int) – Number of output stages.
start_level (int) – Start level of feature pyramids. (Default: 0)
end_level (int) – End level of feature pyramids. (Default: -1 indicates the last level).
norm_cfg (dict) – Dictionary to construct and config norm layer.
activate (str) – Type of activation function in ConvModule (Default: None indicates w/o activation).
order (dict) – Order of components in ConvModule.
upsample (str) – Type of upsample layer.
upsample_cfg (dict) – Dictionary to construct and config upsample layer.
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
- class mmdet.models.necks.HRFPN(High Resolution Feature Pyramids)[source]¶
paper: High-Resolution Representations for Labeling Pixels and Regions.
- Parameters
in_channels (list) – number of channels for each branch.
out_channels (int) – output channels of feature pyramids.
num_outs (int) – number of output stages.
pooling_type (str) – pooling for generating feature pyramids from {MAX, AVG}.
conv_cfg (dict) – dictionary to construct and config conv layer.
norm_cfg (dict) – dictionary to construct and config norm layer.
with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
stride (int) – stride of 3x3 convolutional layers
init_cfg (dict or list[dict], optional) – Initialization config dict.
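Example
A minimal usage sketch added for illustration (not from the original docstring); it assumes one input feature map per entry of in_channels, with stride-2 spacing between branches.
>>> import torch
>>> from mmdet.models.necks import HRFPN
>>> in_channels = [18, 36, 72, 144]
>>> feats = [torch.rand(1, c, 64 // 2**i, 64 // 2**i)
...          for i, c in enumerate(in_channels)]
>>> self = HRFPN(in_channels=in_channels, out_channels=64, num_outs=5)
>>> outs = self(feats)
>>> len(outs)
5
>>> [o.shape[1] for o in outs]
[64, 64, 64, 64, 64]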
- class mmdet.models.necks.NASFCOS_FPN(in_channels, out_channels, num_outs, start_level=1, end_level=- 1, add_extra_convs=False, conv_cfg=None, norm_cfg=None, init_cfg=None)[source]¶
FPN structure in NASFPN.
Implementation of paper NAS-FCOS: Fast Neural Architecture Search for Object Detection
- Parameters
in_channels (List[int]) – Number of input channels per scale.
out_channels (int) – Number of output channels (used at each scale)
num_outs (int) – Number of output scales.
start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 1.
end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
add_extra_convs (bool) – It decides whether to add conv layers on top of the original feature maps. Default to False. If True, its actual mode is specified by extra_convs_on_inputs.
conv_cfg (dict) – dictionary to construct and config conv layer.
norm_cfg (dict) – dictionary to construct and config norm layer.
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
- class mmdet.models.necks.NASFPN(in_channels, out_channels, num_outs, stack_times, start_level=0, end_level=- 1, add_extra_convs=False, norm_cfg=None, init_cfg={'layer': 'Conv2d', 'type': 'Caffe2Xavier'})[source]¶
NAS-FPN.
Implementation of NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection
- Parameters
in_channels (List[int]) – Number of input channels per scale.
out_channels (int) – Number of output channels (used at each scale)
num_outs (int) – Number of output scales.
stack_times (int) – The number of times the pyramid architecture will be stacked.
start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
add_extra_convs (bool) – It decides whether to add conv layers on top of the original feature maps. Default to False. If True, its actual mode is specified by extra_convs_on_inputs.
init_cfg (dict or list[dict], optional) – Initialization config dict.
- class mmdet.models.necks.PAFPN(in_channels, out_channels, num_outs, start_level=0, end_level=- 1, add_extra_convs=False, relu_before_extra_convs=False, no_norm_on_lateral=False, conv_cfg=None, norm_cfg=None, act_cfg=None, init_cfg={'distribution': 'uniform', 'layer': 'Conv2d', 'type': 'Xavier'})[source]¶
Path Aggregation Network for Instance Segmentation.
This is an implementation of the PAFPN in Path Aggregation Network.
- Parameters
in_channels (List[int]) – Number of input channels per scale.
out_channels (int) – Number of output channels (used at each scale)
num_outs (int) – Number of output scales.
start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
add_extra_convs (bool | str) –
If bool, it decides whether to add conv layers on top of the original feature maps. Default to False. If True, it is equivalent to add_extra_convs=’on_input’. If str, it specifies the source feature map of the extra convs. Only the following options are allowed
’on_input’: Last feat map of neck inputs (i.e. backbone feature).
’on_lateral’: Last feature map after lateral convs.
’on_output’: The last output feature map after fpn convs.
relu_before_extra_convs (bool) – Whether to apply relu before the extra conv. Default: False.
no_norm_on_lateral (bool) – Whether to apply norm on lateral. Default: False.
conv_cfg (dict) – Config dict for convolution layer. Default: None.
norm_cfg (dict) – Config dict for normalization layer. Default: None.
act_cfg (str) – Config dict for activation layer in ConvModule. Default: None.
init_cfg (dict or list[dict], optional) – Initialization config dict.
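Example
A small usage sketch in the same style as the FPN example above (added for illustration, not from the original docstring).
>>> import torch
>>> from mmdet.models.necks import PAFPN
>>> in_channels = [2, 3, 5, 7]
>>> scales = [64, 32, 16, 8]
>>> inputs = [torch.rand(1, c, s, s)
...           for c, s in zip(in_channels, scales)]
>>> self = PAFPN(in_channels, 11, len(in_channels)).eval()
>>> outputs = self(inputs)
>>> [tuple(o.shape) for o in outputs]
[(1, 11, 64, 64), (1, 11, 32, 32), (1, 11, 16, 16), (1, 11, 8, 8)]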
- class mmdet.models.necks.RFP(Recursive Feature Pyramid)[source]¶
This is an implementation of RFP in DetectoRS. Different from the standard FPN, the input of RFP should be multi-level features along with the original input image of the backbone.
- Parameters
rfp_steps (int) – Number of unrolled steps of RFP.
rfp_backbone (dict) – Configuration of the backbone for RFP.
aspp_out_channels (int) – Number of output channels of ASPP module.
aspp_dilations (tuple[int]) – Dilation rates of four branches. Default: (1, 3, 6, 1)
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
- class mmdet.models.necks.SSDNeck(in_channels, out_channels, level_strides, level_paddings, l2_norm_scale=20.0, last_kernel_size=3, use_depthwise=False, conv_cfg=None, norm_cfg=None, act_cfg={'type': 'ReLU'}, init_cfg=[{'type': 'Xavier', 'distribution': 'uniform', 'layer': 'Conv2d'}, {'type': 'Constant', 'val': 1, 'layer': 'BatchNorm2d'}])[source]¶
Extra layers of SSD backbone to generate multi-scale feature maps.
- Parameters
in_channels (Sequence[int]) – Number of input channels per scale.
out_channels (Sequence[int]) – Number of output channels per scale.
level_strides (Sequence[int]) – Stride of 3x3 conv per level.
level_paddings (Sequence[int]) – Padding size of 3x3 conv per level.
l2_norm_scale (float|None) – L2 normalization layer init scale. If None, L2 normalization is not applied to the first input feature.
last_kernel_size (int) – Kernel size of the last conv layer. Default: 3.
use_depthwise (bool) – Whether to use DepthwiseSeparableConv. Default: False.
conv_cfg (dict) – Config dict for convolution layer. Default: None.
norm_cfg (dict) – Dictionary to construct and config norm layer. Default: None.
act_cfg (dict) – Config dict for activation layer. Default: dict(type=’ReLU’).
init_cfg (dict or list[dict], optional) – Initialization config dict.
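Example
A minimal usage sketch added for illustration (not from the original docstring). It assumes that out_channels begins with the entries of in_channels, as in the SSD-style configs; the channel numbers and feature sizes below are assumptions of this example.
>>> import torch
>>> from mmdet.models.necks import SSDNeck
>>> feats = (torch.rand(1, 96, 20, 20), torch.rand(1, 320, 10, 10))
>>> self = SSDNeck(
...     in_channels=(96, 320),
...     out_channels=(96, 320, 512, 256, 256, 128),
...     level_strides=(2, 2, 2, 2),
...     level_paddings=(1, 1, 1, 1),
...     l2_norm_scale=None)
>>> outs = self(feats)
>>> len(outs)
6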
- class mmdet.models.necks.YOLOV3Neck(num_scales, in_channels, out_channels, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, act_cfg={'negative_slope': 0.1, 'type': 'LeakyReLU'}, init_cfg=None)[source]¶
The neck of YOLOV3.
It can be treated as a simplified version of FPN. It will take the result from Darknet backbone and do some upsampling and concatenation. It will finally output the detection result.
Note
- The input feats should be ordered from low level to high level, i.e., feats[-1] is the highest-level (smallest-resolution) feature map.
- YOLOV3Neck will process them in reversed order, i.e., starting from the highest-level feature map.
- Parameters
num_scales (int) – The number of scales / stages.
in_channels (List[int]) – The number of input channels per scale.
out_channels (List[int]) – The number of output channels per scale.
conv_cfg (dict, optional) – Config dict for convolution layer. Default: None.
norm_cfg (dict, optional) – Dictionary to construct and config norm layer. Default: dict(type=’BN’, requires_grad=True)
act_cfg (dict, optional) – Config dict for activation layer. Default: dict(type=’LeakyReLU’, negative_slope=0.1).
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
- forward(feats)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
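Example
A minimal usage sketch added for illustration (not from the original docstring). It assumes the Darknet-style ordering described in the note above: feats are ordered from low level (large map) to high level (small map), while in_channels/out_channels are listed from high level to low level as in the YOLOv3 configs.
>>> import torch
>>> from mmdet.models.necks import YOLOV3Neck
>>> feats = (torch.rand(1, 256, 52, 52),
...          torch.rand(1, 512, 26, 26),
...          torch.rand(1, 1024, 13, 13))
>>> self = YOLOV3Neck(num_scales=3,
...                   in_channels=[1024, 512, 256],
...                   out_channels=[512, 256, 128])
>>> outs = self(feats)
>>> [tuple(o.shape) for o in outs]
[(1, 512, 13, 13), (1, 256, 26, 26), (1, 128, 52, 52)]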
- class mmdet.models.necks.YOLOXPAFPN(in_channels, out_channels, num_csp_blocks=3, use_depthwise=False, upsample_cfg={'mode': 'nearest', 'scale_factor': 2}, conv_cfg=None, norm_cfg={'eps': 0.001, 'momentum': 0.03, 'type': 'BN'}, act_cfg={'type': 'Swish'}, init_cfg={'a': 2.23606797749979, 'distribution': 'uniform', 'layer': 'Conv2d', 'mode': 'fan_in', 'nonlinearity': 'leaky_relu', 'type': 'Kaiming'})[source]¶
Path Aggregation Network used in YOLOX.
- Parameters
in_channels (List[int]) – Number of input channels per scale.
out_channels (int) – Number of output channels (used at each scale)
num_csp_blocks (int) – Number of bottlenecks in CSPLayer. Default: 3
use_depthwise (bool) – Whether to use depthwise separable convolution in blocks. Default: False
upsample_cfg (dict) – Config dict for interpolate layer. Default: dict(scale_factor=2, mode=’nearest’)
conv_cfg (dict, optional) – Config dict for convolution layer. Default: None, which means using conv2d.
norm_cfg (dict) – Config dict for normalization layer. Default: dict(type=’BN’)
act_cfg (dict) – Config dict for activation layer. Default: dict(type=’Swish’)
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None.
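Example
A minimal usage sketch added for illustration (not from the original docstring); channel numbers and feature sizes are assumptions of this example.
>>> import torch
>>> from mmdet.models.necks import YOLOXPAFPN
>>> in_channels = [128, 256, 512]
>>> feats = [torch.rand(1, c, 64 // 2**i, 64 // 2**i)
...          for i, c in enumerate(in_channels)]
>>> self = YOLOXPAFPN(in_channels=in_channels, out_channels=128,
...                   num_csp_blocks=1)
>>> outs = self(feats)
>>> [tuple(o.shape) for o in outs]
[(1, 128, 64, 64), (1, 128, 32, 32), (1, 128, 16, 16)]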
dense_heads¶
roi_heads¶
losses¶
utils¶
- class mmdet.models.utils.AdaptiveAvgPool2d(output_size: Union[int, None, Tuple[Optional[int], ...]])[source]¶
Handle empty batch dimension to AdaptiveAvgPool2d.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
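Example
A small sketch added for illustration (not from the original docstring), showing that an empty batch passes through without error.
>>> import torch
>>> from mmdet.models.utils import AdaptiveAvgPool2d
>>> pool = AdaptiveAvgPool2d((1, 1))
>>> pool(torch.rand(2, 16, 7, 7)).shape
torch.Size([2, 16, 1, 1])
>>> pool(torch.rand(0, 16, 7, 7)).shape
torch.Size([0, 16, 1, 1])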
- class mmdet.models.utils.CSPLayer(in_channels, out_channels, expand_ratio=0.5, num_blocks=1, add_identity=True, use_depthwise=False, conv_cfg=None, norm_cfg={'eps': 0.001, 'momentum': 0.03, 'type': 'BN'}, act_cfg={'type': 'Swish'}, init_cfg=None)[source]¶
Cross Stage Partial Layer.
- Parameters
in_channels (int) – The input channels of the CSP layer.
out_channels (int) – The output channels of the CSP layer.
expand_ratio (float) – Ratio to adjust the number of channels of the hidden layer. Default: 0.5
num_blocks (int) – Number of blocks. Default: 1
add_identity (bool) – Whether to add identity in blocks. Default: True
use_depthwise (bool) – Whether to use depthwise separable convolution in blocks. Default: False
conv_cfg (dict, optional) – Config dict for convolution layer. Default: None, which means using conv2d.
norm_cfg (dict) – Config dict for normalization layer. Default: dict(type=’BN’)
act_cfg (dict) – Config dict for activation layer. Default: dict(type=’Swish’)
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
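Example
A minimal usage sketch added for illustration (not from the original docstring).
>>> import torch
>>> from mmdet.models.utils import CSPLayer
>>> layer = CSPLayer(in_channels=32, out_channels=64, num_blocks=1)
>>> layer(torch.rand(1, 32, 16, 16)).shape
torch.Size([1, 64, 16, 16])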
- class mmdet.models.utils.ConvUpsample(in_channels, inner_channels, num_layers=1, num_upsample=None, conv_cfg=None, norm_cfg=None, init_cfg=None, **kwargs)[source]¶
ConvUpsample performs 2x upsampling after Conv.
There are several ConvModule layers. In the first few layers, upsampling will be applied after each layer of convolution. The number of upsampling must be no more than the number of ConvModule layers.
- Parameters
in_channels (int) – Number of channels in the input feature map.
inner_channels (int) – Number of channels produced by the convolution.
num_layers (int) – Number of convolution layers.
num_upsample (int | optional) – Number of upsampling layers. Must be no more than num_layers. Upsampling will be applied after the first num_upsample layers of convolution. Default: num_layers.
conv_cfg (dict) – Config dict for convolution layer. Default: None, which means using conv2d.
norm_cfg (dict) – Config dict for normalization layer. Default: None.
init_cfg (dict) – Config dict for initialization. Default: None.
kwargs (keyword arguments) – Other arguments used in ConvModule.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
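Example
A minimal usage sketch added for illustration (not from the original docstring): two conv layers, each followed by a 2x upsampling, turn an 8x8 map into a 32x32 map with inner_channels channels.
>>> import torch
>>> from mmdet.models.utils import ConvUpsample
>>> m = ConvUpsample(in_channels=16, inner_channels=8,
...                  num_layers=2, num_upsample=2)
>>> m(torch.rand(1, 16, 8, 8)).shape
torch.Size([1, 8, 32, 32])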
- class mmdet.models.utils.DetrTransformerDecoder(*args, post_norm_cfg={'type': 'LN'}, return_intermediate=False, **kwargs)[source]¶
Implements the decoder in DETR transformer.
- Parameters
return_intermediate (bool) – Whether to return intermediate outputs.
post_norm_cfg (dict) – Config of last normalization layer. Default: LN.
- forward(query, *args, **kwargs)[source]¶
Forward function for TransformerDecoder.
- Parameters
query (Tensor) – Input query with shape (num_query, bs, embed_dims).
- Returns
- Results with shape [1, num_query, bs, embed_dims] when return_intermediate is False; otherwise it has shape [num_layers, num_query, bs, embed_dims].
- Return type
Tensor
- class mmdet.models.utils.DetrTransformerDecoderLayer(attn_cfgs, feedforward_channels, ffn_dropout=0.0, operation_order=None, act_cfg={'inplace': True, 'type': 'ReLU'}, norm_cfg={'type': 'LN'}, ffn_num_fcs=2, **kwargs)[source]¶
Implements decoder layer in DETR transformer.
- Parameters
attn_cfgs (list[mmcv.ConfigDict] | list[dict] | dict) – Configs for self_attention or cross_attention; the order should be consistent with that in operation_order. If it is a dict, it will be expanded to the number of attention modules in operation_order.
feedforward_channels (int) – The hidden dimension for FFNs.
ffn_dropout (float) – Probability of an element to be zeroed in ffn. Default 0.0.
operation_order (tuple[str]) – The execution order of operations in the transformer. Such as (‘self_attn’, ‘norm’, ‘ffn’, ‘norm’). Default: None.
act_cfg (dict) – The activation config for FFNs. Default: ReLU.
norm_cfg (dict) – Config dict for normalization layer. Default: LN.
ffn_num_fcs (int) – The number of fully-connected layers in FFNs. Default: 2.
- class mmdet.models.utils.DyReLU(channels, ratio=4, conv_cfg=None, act_cfg=({'type': 'ReLU'}, {'type': 'HSigmoid', 'bias': 3.0, 'divisor': 6.0}), init_cfg=None)[source]¶
Dynamic ReLU (DyReLU) module.
See Dynamic ReLU for details. Current implementation is specialized for task-aware attention in DyHead. HSigmoid arguments in default act_cfg follow DyHead official code. https://github.com/microsoft/DynamicHead/blob/master/dyhead/dyrelu.py
- Parameters
channels (int) – The input (and output) channels of DyReLU module.
ratio (int) – Squeeze ratio in Squeeze-and-Excitation-like module, the intermediate channel will be int(channels/ratio). Default: 4.
conv_cfg (None or dict) – Config dict for convolution layer. Default: None, which means using conv2d.
act_cfg (dict or Sequence[dict]) – Config dict for activation layer. If act_cfg is a dict, two activation layers will be configured by this dict. If act_cfg is a sequence of dicts, the first activation layer will be configured by the first dict and the second activation layer will be configured by the second dict. Default: (dict(type=’ReLU’), dict(type=’HSigmoid’, bias=3.0, divisor=6.0))
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
- class mmdet.models.utils.DynamicConv(in_channels=256, feat_channels=64, out_channels=None, input_feat_shape=7, with_proj=True, act_cfg={'inplace': True, 'type': 'ReLU'}, norm_cfg={'type': 'LN'}, init_cfg=None)[source]¶
Implements Dynamic Convolution.
This module generates parameters for each sample and uses bmm to implement 1x1 convolution. Code is modified from the official github repo.
- Parameters
in_channels (int) – The input feature channel. Defaults to 256.
feat_channels (int) – The inner feature channel. Defaults to 64.
out_channels (int, optional) – The output feature channel. When not specified, it will be set to in_channels by default
input_feat_shape (int) – The shape of input feature. Defaults to 7.
with_proj (bool) – Project two-dimensional feature to one-dimensional feature. Default to True.
act_cfg (dict) – The activation config for DynamicConv.
norm_cfg (dict) – Config dict for normalization layer. Default layer normalization.
init_cfg (mmcv.ConfigDict, optional) – The Config for initialization. Default: None.
- forward(param_feature, input_feature)[source]¶
Forward function for DynamicConv.
- Parameters
param_feature (Tensor) – The feature can be used to generate the parameter, has shape (num_all_proposals, in_channels).
input_feature (Tensor) – Feature that interact with parameters, has shape (num_all_proposals, in_channels, H, W).
- Returns
The output feature has shape (num_all_proposals, out_channels).
- Return type
Tensor
- class mmdet.models.utils.InvertedResidual(in_channels, out_channels, mid_channels, kernel_size=3, stride=1, se_cfg=None, with_expand_conv=True, conv_cfg=None, norm_cfg={'type': 'BN'}, act_cfg={'type': 'ReLU'}, drop_path_rate=0.0, with_cp=False, init_cfg=None)[source]¶
Inverted Residual Block.
- Parameters
in_channels (int) – The input channels of this Module.
out_channels (int) – The output channels of this Module.
mid_channels (int) – The input channels of the depthwise convolution.
kernel_size (int) – The kernel size of the depthwise convolution. Default: 3.
stride (int) – The stride of the depthwise convolution. Default: 1.
se_cfg (dict) – Config dict for se layer. Default: None, which means no se layer.
with_expand_conv (bool) – Use expand conv or not. If set False, mid_channels must be the same with in_channels. Default: True.
conv_cfg (dict) – Config dict for convolution layer. Default: None, which means using conv2d.
norm_cfg (dict) – Config dict for normalization layer. Default: dict(type=’BN’).
act_cfg (dict) – Config dict for activation layer. Default: dict(type=’ReLU’).
drop_path_rate (float) – stochastic depth rate. Defaults to 0.
with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
- Returns
The output tensor.
- Return type
Tensor
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
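Example
A minimal usage sketch added for illustration (not from the original docstring); with stride=1 and matching input/output channels the block uses a residual connection and preserves the feature map shape.
>>> import torch
>>> from mmdet.models.utils import InvertedResidual
>>> block = InvertedResidual(in_channels=16, out_channels=16,
...                          mid_channels=64, kernel_size=3, stride=1)
>>> block(torch.rand(1, 16, 32, 32)).shape
torch.Size([1, 16, 32, 32])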
- class mmdet.models.utils.LearnedPositionalEncoding(num_feats, row_num_embed=50, col_num_embed=50, init_cfg={'layer': 'Embedding', 'type': 'Uniform'})[source]¶
Position embedding with learnable embedding weights.
- Parameters
num_feats (int) – The feature dimension for each position along x-axis or y-axis. The final returned dimension for each position is 2 times of this value.
row_num_embed (int, optional) – The dictionary size of row embeddings. Default 50.
col_num_embed (int, optional) – The dictionary size of col embeddings. Default 50.
init_cfg (dict or list[dict], optional) – Initialization config dict.
- forward(mask)[source]¶
Forward function for LearnedPositionalEncoding.
- Parameters
mask (Tensor) – ByteTensor mask. Non-zero values representing ignored positions, while zero values means valid positions for this image. Shape [bs, h, w].
- Returns
- Returned position embedding with shape [bs, num_feats*2, h, w].
- Return type
pos (Tensor)
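Example
A minimal usage sketch added for illustration (not from the original docstring); the mask height and width should not exceed row_num_embed and col_num_embed.
>>> import torch
>>> from mmdet.models.utils import LearnedPositionalEncoding
>>> pe = LearnedPositionalEncoding(num_feats=128)
>>> mask = torch.zeros(1, 32, 32, dtype=torch.uint8)
>>> pe(mask).shape
torch.Size([1, 256, 32, 32])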
- class mmdet.models.utils.NormedConv2d(*args, tempearture=20, power=1.0, eps=1e-06, norm_over_kernel=False, **kwargs)[source]¶
Normalized Conv2d Layer.
- Parameters
tempearture (float, optional) – Temperature term. Default to 20.
power (float, optional) – Power term. Default to 1.0.
eps (float, optional) – The minimal value of divisor to keep numerical stability. Default to 1e-6.
norm_over_kernel (bool, optional) – Normalize over kernel. Default to False.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class mmdet.models.utils.NormedLinear(*args, tempearture=20, power=1.0, eps=1e-06, **kwargs)[source]¶
Normalized Linear Layer.
- Parameters
tempearture (float, optional) – Temperature term. Default to 20.
power (float, optional) – Power term. Default to 1.0.
eps (float, optional) – The minimal value of divisor to keep numerical stability. Default to 1e-6.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class mmdet.models.utils.PatchEmbed(in_channels=3, embed_dims=768, conv_type='Conv2d', kernel_size=16, stride=16, padding='corner', dilation=1, bias=True, norm_cfg=None, input_size=None, init_cfg=None)[source]¶
Image to Patch Embedding.
We use a conv layer to implement PatchEmbed.
- Parameters
in_channels (int) – The num of input channels. Default: 3
embed_dims (int) – The dimensions of embedding. Default: 768
conv_type (str) – The config dict for embedding conv layer type selection. Default: “Conv2d”.
kernel_size (int) – The kernel_size of embedding conv. Default: 16.
stride (int) – The sliding stride of the embedding conv. Default: 16.
padding (int | tuple | string) – The padding length of embedding conv. When it is a string, it means the mode of adaptive padding, support “same” and “corner” now. Default: “corner”.
dilation (int) – The dilation rate of embedding conv. Default: 1.
bias (bool) – Bias of embed conv. Default: True.
norm_cfg (dict, optional) – Config dict for normalization layer. Default: None.
input_size (int | tuple | None) – The size of input, which will be used to calculate the out size. Only works when dynamic_size is False. Default: None.
init_cfg (mmcv.ConfigDict, optional) – The Config for initialization. Default: None.
- class mmdet.models.utils.ResLayer(block, inplanes, planes, num_blocks, stride=1, avg_down=False, conv_cfg=None, norm_cfg={'type': 'BN'}, downsample_first=True, **kwargs)[source]¶
ResLayer to build ResNet style backbone.
- Parameters
block (nn.Module) – block used to build ResLayer.
inplanes (int) – inplanes of block.
planes (int) – planes of block.
num_blocks (int) – number of blocks.
stride (int) – stride of the first block. Default: 1
avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False
conv_cfg (dict) – dictionary to construct and config conv layer. Default: None
norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’BN’)
downsample_first (bool) – Downsample at the first block or last block. False for Hourglass, True for ResNet. Default: True
- class mmdet.models.utils.SELayer(channels, ratio=16, conv_cfg=None, act_cfg=({'type': 'ReLU'}, {'type': 'Sigmoid'}), init_cfg=None)[source]¶
Squeeze-and-Excitation Module.
- Parameters
channels (int) – The input (and output) channels of the SE layer.
ratio (int) – Squeeze ratio in SELayer, the intermediate channel will be int(channels/ratio). Default: 16.
conv_cfg (None or dict) – Config dict for convolution layer. Default: None, which means using conv2d.
act_cfg (dict or Sequence[dict]) – Config dict for activation layer. If act_cfg is a dict, two activation layers will be configured by this dict. If act_cfg is a sequence of dicts, the first activation layer will be configured by the first dict and the second activation layer will be configured by the second dict. Default: (dict(type=’ReLU’), dict(type=’Sigmoid’))
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class mmdet.models.utils.SimplifiedBasicBlock(inplanes, planes, stride=1, dilation=1, downsample=None, style='pytorch', with_cp=False, conv_cfg=None, norm_cfg={'type': 'BN'}, dcn=None, plugins=None, init_fg=None)[source]¶
Simplified version of original basic residual block. This is used in SCNet.
- Norm layer is now optional
- Last ReLU in forward function is removed
- property norm1¶
normalization layer after the first convolution layer
- Type
nn.Module
- property norm2¶
normalization layer after the second convolution layer
- Type
nn.Module
- class mmdet.models.utils.SinePositionalEncoding(num_feats, temperature=10000, normalize=False, scale=6.283185307179586, eps=1e-06, offset=0.0, init_cfg=None)[source]¶
Position encoding with sine and cosine functions.
See End-to-End Object Detection with Transformers for details.
- Parameters
num_feats (int) – The feature dimension for each position along x-axis or y-axis. Note the final returned dimension for each position is 2 times of this value.
temperature (int, optional) – The temperature used for scaling the position embedding. Defaults to 10000.
normalize (bool, optional) – Whether to normalize the position embedding. Defaults to False.
scale (float, optional) – A scale factor that scales the position embedding. The scale will be used only when normalize is True. Defaults to 2*pi.
eps (float, optional) – A value added to the denominator for numerical stability. Defaults to 1e-6.
offset (float) – An offset added to embed when doing the normalization. Defaults to 0.
init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
- forward(mask)[source]¶
Forward function for SinePositionalEncoding.
- Parameters
mask (Tensor) – ByteTensor mask. Non-zero values representing ignored positions, while zero values means valid positions for this image. Shape [bs, h, w].
- Returns
- Returned position embedding with shape [bs, num_feats*2, h, w].
- Return type
pos (Tensor)
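Example
A minimal usage sketch added for illustration (not from the original docstring); an all-zero mask means every position is valid.
>>> import torch
>>> from mmdet.models.utils import SinePositionalEncoding
>>> pe = SinePositionalEncoding(num_feats=128, normalize=True)
>>> mask = torch.zeros(1, 32, 32, dtype=torch.uint8)
>>> pe(mask).shape
torch.Size([1, 256, 32, 32])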
- class mmdet.models.utils.Transformer(encoder=None, decoder=None, init_cfg=None)[source]¶
Implements the DETR transformer.
Following the official DETR implementation, this module is copy-pasted from torch.nn.Transformer with the following modifications:
positional encodings are passed in MultiheadAttention
extra LN at the end of encoder is removed
decoder returns a stack of activations from all decoding layers
See paper: End-to-End Object Detection with Transformers for details.
- Parameters
encoder (mmcv.ConfigDict | Dict) – Config of TransformerEncoder. Defaults to None.
decoder (mmcv.ConfigDict | Dict) – Config of TransformerDecoder. Defaults to None.
init_cfg (mmcv.ConfigDict, optional) – The Config for initialization. Defaults to None.
- forward(x, mask, query_embed, pos_embed)[source]¶
Forward function for Transformer.
- Parameters
x (Tensor) – Input query with shape [bs, c, h, w] where c = embed_dims.
mask (Tensor) – The key_padding_mask used for encoder and decoder, with shape [bs, h, w].
query_embed (Tensor) – The query embedding for decoder, with shape [num_query, c].
pos_embed (Tensor) – The positional encoding for encoder and decoder, with the same shape as x.
- Returns
results of decoder containing the following tensor.
- out_dec: Output from decoder. If return_intermediate_dec is True, output has shape [num_dec_layers, bs, num_query, embed_dims], else it has shape [1, bs, num_query, embed_dims].
- memory: Output results from encoder, with shape [bs, embed_dims, h, w].
- Return type
tuple[Tensor]
- mmdet.models.utils.adaptive_avg_pool2d(input, output_size)[source]¶
Handle empty batch dimension to adaptive_avg_pool2d.
- Parameters
input (tensor) – 4D tensor.
output_size (int, tuple[int,int]) – the target output size.
- mmdet.models.utils.build_linear_layer(cfg, *args, **kwargs)[source]¶
Build linear layer.
- Parameters
cfg (dict) – The linear layer config, which should contain: type (str) – Layer type; layer args – Args needed to instantiate a linear layer.
args (argument list) – Arguments passed to the __init__ method of the corresponding linear layer.
kwargs (keyword arguments) – Keyword arguments passed to the __init__ method of the corresponding linear layer.
- Returns
Created linear layer.
- Return type
nn.Module
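Example
A minimal usage sketch added for illustration (not from the original docstring); it assumes the default ‘Linear’ type is registered, which maps to torch.nn.Linear.
>>> from mmdet.models.utils import build_linear_layer
>>> build_linear_layer(dict(type='Linear'), 256, 81)
Linear(in_features=256, out_features=81, bias=True)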
- mmdet.models.utils.gaussian_radius(det_size, min_overlap)[source]¶
Generate 2D gaussian radius.
This function is modified from the official github repo.
Given min_overlap, the radius can be computed from a quadratic equation according to Vieta’s formulas. There are 3 cases for computing the gaussian radius, detailed below.
Explanation of the figures: lt and br indicate the left-top and bottom-right corners of the ground truth box; x indicates the generated corner at the limited position when radius=r. (The ASCII diagrams of the three cases are omitted here.)
Case 1: one corner is inside the gt box and the other is outside. To ensure the IoU of the generated box and the gt box is larger than min_overlap:
\[\begin{split}\cfrac{(w-r)(h-r)}{wh+(w+h)r-r^2} \ge iou \quad\Rightarrow\quad r^2-(w+h)r+\cfrac{1-iou}{1+iou}\,wh \ge 0 \\ a = 1,\quad b = -(w+h),\quad c = \cfrac{1-iou}{1+iou}\,wh \\ r \le \cfrac{-b-\sqrt{b^2-4ac}}{2a}\end{split}\]
Case 2: both corners are inside the gt box. To ensure the IoU of the generated box and the gt box is larger than min_overlap:
\[\begin{split}\cfrac{(w-2r)(h-2r)}{wh} \ge iou \quad\Rightarrow\quad 4r^2-2(w+h)r+(1-iou)\,wh \ge 0 \\ a = 4,\quad b = -2(w+h),\quad c = (1-iou)\,wh \\ r \le \cfrac{-b-\sqrt{b^2-4ac}}{2a}\end{split}\]
Case 3: both corners are outside the gt box. To ensure the IoU of the generated box and the gt box is larger than min_overlap:
\[\begin{split}\cfrac{wh}{(w+2r)(h+2r)} \ge iou \quad\Rightarrow\quad 4\,iou\,r^2+2\,iou\,(w+h)r+(iou-1)\,wh \le 0 \\ a = 4\,iou,\quad b = 2\,iou\,(w+h),\quad c = (iou-1)\,wh \\ r \le \cfrac{-b+\sqrt{b^2-4ac}}{2a}\end{split}\]
- Parameters
det_size (list[int]) – Shape of object.
min_overlap (float) – Min IoU with ground truth for boxes generated by keypoints inside the gaussian kernel.
- Returns
Radius of gaussian kernel.
- Return type
radius (int)
- mmdet.models.utils.gen_gaussian_target(heatmap, center, radius, k=1)[source]¶
Generate 2D gaussian heatmap.
- Parameters
heatmap (Tensor) – Input heatmap, the gaussian kernel will cover on it and maintain the max value.
center (list[int]) – Coord of gaussian kernel’s center.
radius (int) – Radius of gaussian kernel.
k (int) – Coefficient of gaussian kernel. Default: 1.
- Returns
Updated heatmap covered by gaussian kernel.
- Return type
out_heatmap (Tensor)
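Example
A minimal usage sketch added for illustration (not from the original docstring), combining gaussian_radius and gen_gaussian_target for a 32x32 box on a 64x64 heatmap; the box size, overlap, and center are assumptions of this example.
>>> import torch
>>> from mmdet.models.utils import gaussian_radius, gen_gaussian_target
>>> radius = gaussian_radius((32, 32), min_overlap=0.7)
>>> heatmap = torch.zeros(64, 64)
>>> heatmap = gen_gaussian_target(heatmap, center=[20, 30],
...                               radius=max(1, int(radius)))
>>> float(heatmap[30, 20])  # the peak sits at (y=30, x=20)
1.0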
- mmdet.models.utils.get_uncertain_point_coords_with_randomness(mask_pred, labels, num_points, oversample_ratio, importance_sample_ratio)[source]¶
Get num_points most uncertain points with random points during train.
Sample points in [0, 1] x [0, 1] coordinate space based on their uncertainty. The uncertainties are calculated for each point using the ‘get_uncertainty()’ function that takes the point’s logit prediction as input.
- Parameters
mask_pred (Tensor) – A tensor of shape (num_rois, num_classes, mask_height, mask_width) for class-specific or class-agnostic prediction.
labels (list) – The ground truth class for each instance.
num_points (int) – The number of points to sample.
oversample_ratio (int) – Oversampling parameter.
importance_sample_ratio (float) – Ratio of points that are sampled via importance sampling.
- Returns
- A tensor of shape (num_rois, num_points, 2) that contains the coordinates of the sampled points.
- Return type
point_coords (Tensor)
- mmdet.models.utils.get_uncertainty(mask_pred, labels)[source]¶
Estimate uncertainty based on pred logits.
We estimate uncertainty as L1 distance between 0.0 and the logits prediction in ‘mask_pred’ for the foreground class in classes.
- Parameters
mask_pred (Tensor) – mask prediction logits, shape (num_rois, num_classes, mask_height, mask_width).
labels (list[Tensor]) – Either predicted or ground truth label for each predicted mask, of length num_rois.
- Returns
- Uncertainty scores with the most uncertain locations having the highest uncertainty score, shape (num_rois, 1, mask_height, mask_width)
- Return type
scores (Tensor)
- mmdet.models.utils.interpolate_as(source, target, mode='bilinear', align_corners=False)[source]¶
Interpolate the source to the shape of the target.
The source must be a Tensor, but the target can be a Tensor or a np.ndarray with the shape (…, target_h, target_w).
- Parameters
source (Tensor) – A 3D/4D Tensor with the shape (N, H, W) or (N, C, H, W).
target (Tensor | np.ndarray) – The interpolation target with the shape (…, target_h, target_w).
mode (str) – Algorithm used for interpolation. The options are the same as those in F.interpolate(). Default: 'bilinear'.
align_corners (bool) – The same as the argument in F.interpolate().
- Returns
The interpolated source Tensor.
- Return type
Tensor
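Example
A minimal usage sketch added for illustration (not from the original docstring).
>>> import torch
>>> from mmdet.models.utils import interpolate_as
>>> source = torch.rand(1, 3, 16, 16)
>>> target = torch.rand(1, 3, 64, 64)
>>> interpolate_as(source, target).shape
torch.Size([1, 3, 64, 64])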
- mmdet.models.utils.make_divisible(value, divisor, min_value=None, min_ratio=0.9)[source]¶
Make divisible function.
This function rounds the channel number to the nearest value that can be divisible by the divisor. It is taken from the original tf repo. It ensures that all layers have a channel number that is divisible by divisor. It can be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py # noqa
- Parameters
value (int) – The original channel number.
divisor (int) – The divisor to fully divide the channel number.
min_value (int) – The minimum value of the output channel. Default: None, which means the minimum value is equal to the divisor.
min_ratio (float) – The minimum ratio of the rounded channel number to the original channel number. Default: 0.9.
- Returns
The modified output channel number.
- Return type
int
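Example
Two worked values added for illustration (not from the original docstring), showing both the rounding and the min_ratio bump.
>>> from mmdet.models.utils import make_divisible
>>> make_divisible(34, 8)   # rounds to the nearest multiple of 8
32
>>> make_divisible(10, 8)   # 8 < 0.9 * 10, so the result is bumped to 16
16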
- mmdet.models.utils.nchw_to_nlc(x)[source]¶
Flatten [N, C, H, W] shape tensor to [N, L, C] shape tensor.
- Parameters
x (Tensor) – The input tensor of shape [N, C, H, W] before conversion.
- Returns
The output tensor of shape [N, L, C] after conversion.
- Return type
Tensor
- mmdet.models.utils.nlc_to_nchw(x, hw_shape)[source]¶
Convert [N, L, C] shape tensor to [N, C, H, W] shape tensor.
- Parameters
x (Tensor) – The input tensor of shape [N, L, C] before conversion.
hw_shape (Sequence[int]) – The height and width of output feature map.
- Returns
The output tensor of shape [N, C, H, W] after conversion.
- Return type
Tensor
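Example
A minimal round-trip sketch added for illustration (not from the original docstring).
>>> import torch
>>> from mmdet.models.utils import nchw_to_nlc, nlc_to_nchw
>>> x = torch.rand(2, 16, 8, 8)
>>> seq = nchw_to_nlc(x)
>>> seq.shape
torch.Size([2, 64, 16])
>>> torch.equal(nlc_to_nchw(seq, (8, 8)), x)
True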
- mmdet.models.utils.preprocess_panoptic_gt(gt_labels, gt_masks, gt_semantic_seg, num_things, num_stuff, img_metas)[source]¶
Preprocess the ground truth for an image.
- Parameters
gt_labels (Tensor) – Ground truth labels of each bbox, with shape (num_gts, ).
gt_masks (BitmapMasks) – Ground truth masks of each instance of an image, shape (num_gts, h, w).
gt_semantic_seg (Tensor | None) – Ground truth of semantic segmentation with the shape (1, h, w). [0, num_thing_class - 1] means things, [num_thing_class, num_class-1] means stuff, 255 means VOID. It’s None when training instance segmentation.
img_metas (dict) – Meta information of the image.
- Returns
a tuple containing the following targets.
- labels (Tensor): Ground truth class indices for an image, with shape (n, ), where n is the sum of the number of stuff types and the number of instances in the image.
- masks (Tensor): Ground truth mask for an image, with shape (n, h, w). Contains stuff and things when training panoptic segmentation, and things only when training instance segmentation.
- Return type
tuple