Looking at the commit history, this took two whole days. That's understandable since I study at night after work, but still... I need to push harder.
※ This post does not cover AI theory.
Training and prediction are performed with the libraries PyTorch provides out of the box.
python : 3.7.5
torch : 1.8.1+cu111
pycocotools : 2.0.2
1. Fine-tune Faster & Mask R-CNN. Additional notes are included to explain what each component does.
Faster_Mask_RCNN
├─ detection
│   ├─ coco_eval.py
│   ├─ coco_utils.py
│   ├─ engine.py
│   ├─ group_by_aspect_ratio.py
│   ├─ presets.py
│   ├─ train.py
│   ├─ transforms.py
│   └─ utils.py
├─ PenFudanPed
├─ active.py
├─ datasets.py
├─ networks.py
└─ run.py
- detection : the example library code that PyTorch provides out of the box.
- PenFudanPed : image data folder for training and testing (PenFudanPed - PASCAL Annotation Version 1.00).
- active.py : class containing the Train / Predict / View logic.
- datasets.py : performs data preprocessing.
- networks.py : builds the model.
- run.py : entry point used to verify that the model runs correctly.
2. Set up the virtual environment.
- See the following link for environment setup:
https://mizzlena.tistory.com/entry/%EC%9D%B8%EA%B3%B5%EC%A7%80%EB%8A%A5-Pytorch-Install
- Modify the files inside the virtual environment.
# File : venv\lib\site-packages\torchvision\models\detection\faster_rcnn.py
###################
# line 17
__all__ = [
    "FasterRCNN", "fasterrcnn_resnet_fpn", "fasterrcnn_resnet50_fpn",
    "fasterrcnn_mobilenet_v3_large_320_fpn", "fasterrcnn_mobilenet_v3_large_fpn"
]
###################
# add the following function
def fasterrcnn_resnet_fpn(net='resnet50', pretrained=False, progress=True,
                          num_classes=91, pretrained_backbone=True,
                          trainable_backbone_layers=None, **kwargs):
    trainable_backbone_layers = _validate_trainable_layers(
        pretrained or pretrained_backbone, trainable_backbone_layers, 5, 3)
    backbone = resnet_fpn_backbone(
        net, pretrained_backbone, trainable_layers=trainable_backbone_layers)
    model = FasterRCNN(backbone, num_classes, **kwargs)
    return model
# File : venv\lib\site-packages\torchvision\models\detection\mask_rcnn.py
###################
# line 13
__all__ = [
    "MaskRCNN", "maskrcnn_resnet_fpn", "maskrcnn_resnet50_fpn",
]
###################
# add the following function
def maskrcnn_resnet_fpn(net='resnet50', pretrained=False, progress=True,
                        num_classes=91, pretrained_backbone=True,
                        trainable_backbone_layers=None, **kwargs):
    trainable_backbone_layers = _validate_trainable_layers(
        pretrained or pretrained_backbone, trainable_backbone_layers, 5, 3)
    backbone = resnet_fpn_backbone(
        net, pretrained_backbone, trainable_layers=trainable_backbone_layers)
    model = MaskRCNN(backbone, num_classes, **kwargs)
    return model
3. Fix the import statements.
import utils => from . import utils
import transforms as T => from . import transforms as T
import presets => from . import presets
from coco_utils => from .coco_utils
from coco_eval => from .coco_eval
from group_by_aspect_ratio => from .group_by_aspect_ratio
from engine => from .engine
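This change is needed because the detection/ folder is imported as a package: under Python 3, a bare `import utils` inside a package does not find the sibling module, while the relative form `from . import utils` does. A self-contained toy demonstration (the package and file names here are illustrative, not from the repo):

```python
import os
import sys
import tempfile

# Build a throwaway package with two sibling modules, mirroring detection/.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'demo_detection')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'utils.py'), 'w') as f:
    f.write('VALUE = 42\n')
with open(os.path.join(pkg, 'engine.py'), 'w') as f:
    # A bare 'import utils' here would raise ModuleNotFoundError when
    # engine is loaded as part of the package; the relative form works.
    f.write('from . import utils\n\ndef get_value():\n    return utils.VALUE\n')

sys.path.insert(0, root)
from demo_detection.engine import get_value
print(get_value())  # 42
```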
4. Download the files from the GitHub repository below.
https://github.com/MizzleAa/Pytorch-Object-Detection-Fintuning-Tutorial-2
5. Activate the virtual environment and run run.py.
6. Results
- Model summary
MaskRCNN(
(transform): GeneralizedRCNNTransform(
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
Resize(min_size=(800,), max_size=1333, mode='bilinear')
)
(backbone): BackboneWithFPN(
(body): IntermediateLayerGetter(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): FrozenBatchNorm2d(64, eps=0.0)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(64, eps=0.0)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(64, eps=0.0)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(256, eps=0.0)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): FrozenBatchNorm2d(256, eps=0.0)
)
)
(1): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(64, eps=0.0)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(64, eps=0.0)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(256, eps=0.0)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(64, eps=0.0)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(64, eps=0.0)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(256, eps=0.0)
(relu): ReLU(inplace=True)
)
)
(layer2): Sequential(
(0): Bottleneck(
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(128, eps=0.0)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(128, eps=0.0)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(512, eps=0.0)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): FrozenBatchNorm2d(512, eps=0.0)
)
)
(1): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(128, eps=0.0)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(128, eps=0.0)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(512, eps=0.0)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(128, eps=0.0)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(128, eps=0.0)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(512, eps=0.0)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(128, eps=0.0)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(128, eps=0.0)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(512, eps=0.0)
(relu): ReLU(inplace=True)
)
)
(layer3): Sequential(
(0): Bottleneck(
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): FrozenBatchNorm2d(1024, eps=0.0)
)
)
(1): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
(4): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
(5): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
)
(layer4): Sequential(
(0): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(512, eps=0.0)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(512, eps=0.0)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(2048, eps=0.0)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): FrozenBatchNorm2d(2048, eps=0.0)
)
)
(1): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(512, eps=0.0)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(512, eps=0.0)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(2048, eps=0.0)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(512, eps=0.0)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(512, eps=0.0)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(2048, eps=0.0)
(relu): ReLU(inplace=True)
)
)
)
(fpn): FeaturePyramidNetwork(
(inner_blocks): ModuleList(
(0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
(1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
(2): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(3): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
)
(layer_blocks): ModuleList(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(extra_blocks): LastLevelMaxPool()
)
)
(rpn): RegionProposalNetwork(
(anchor_generator): AnchorGenerator()
(head): RPNHead(
(conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(cls_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
(bbox_pred): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
)
)
(roi_heads): RoIHeads(
(box_roi_pool): MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'], output_size=(7, 7), sampling_ratio=2)
(box_head): TwoMLPHead(
(fc6): Linear(in_features=12544, out_features=1024, bias=True)
(fc7): Linear(in_features=1024, out_features=1024, bias=True)
)
(box_predictor): FastRCNNPredictor(
(cls_score): Linear(in_features=1024, out_features=2, bias=True)
(bbox_pred): Linear(in_features=1024, out_features=8, bias=True)
)
(mask_roi_pool): MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'], output_size=(14, 14), sampling_ratio=2)
(mask_head): MaskRCNNHeads(
(mask_fcn1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu1): ReLU(inplace=True)
(mask_fcn2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu2): ReLU(inplace=True)
(mask_fcn3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu3): ReLU(inplace=True)
(mask_fcn4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu4): ReLU(inplace=True)
)
(mask_predictor): MaskRCNNPredictor(
(conv5_mask): ConvTranspose2d(256, 256, kernel_size=(2, 2), stride=(2, 2))
(relu): ReLU(inplace=True)
(mask_fcn_logits): Conv2d(256, 2, kernel_size=(1, 1), stride=(1, 1))
)
)
)
- Training log
Epoch: [0] [ 0/60] eta: 0:05:27 lr: 0.000090 loss: 3.5170 (3.5170) loss_classifier: 0.7805 (0.7805) loss_box_reg: 0.2475 (0.2475) loss_mask: 2.4583 (2.4583) loss_objectness: 0.0268 (0.0268) loss_rpn_box_reg: 0.0038 (0.0038) time: 5.4573 data: 2.4613 max mem: 2088
Epoch: [0] [10/60] eta: 0:00:33 lr: 0.000936 loss: 1.5048 (2.1420) loss_classifier: 0.5245 (0.5054) loss_box_reg: 0.3609 (0.3375) loss_mask: 0.6584 (1.2701) loss_objectness: 0.0256 (0.022) loss_rpn_box_reg: 0.0052 (0.0063) time: 0.6632 data: 0.2247 max mem: 2710
Epoch: [0] [20/60] eta: 0:00:17 lr: 0.001783 loss: 0.9004 (1.4188) loss_classifier: 0.1836 (0.3365) loss_box_reg: 0.2167 (0.2734) loss_mask: 0.3969 (0.7848) loss_objectness: 0.0133 (0.018) loss_rpn_box_reg: 0.0052 (0.0059) time: 0.1819 data: 0.0011 max mem: 2710
Epoch: [0] [30/60] eta: 0:00:10 lr: 0.002629 loss: 0.6103 (1.1590) loss_classifier: 0.1277 (0.2633) loss_box_reg: 0.2056 (0.2716) loss_mask: 0.2112 (0.6018) loss_objectness: 0.0064 (0.0157) loss_rpn_box_reg: 0.0059 (0.0065) time: 0.1839 data: 0.0012 max mem: 3044
Epoch: [0] [40/60] eta: 0:00:06 lr: 0.003476 loss: 0.5469 (0.9926) loss_classifier: 0.0775 (0.2164) loss_box_reg: 0.2492 (0.2628) loss_mask: 0.1701 (0.4942) loss_objectness: 0.0042 (0.0128) loss_rpn_box_reg: 0.0066 (0.0063) time: 0.1776 data: 0.0010 max mem: 3044
Epoch: [0] [50/60] eta: 0:00:02 lr: 0.004323 loss: 0.3702 (0.8682) loss_classifier: 0.0490 (0.1822) loss_box_reg: 0.1522 (0.2366) loss_mask: 0.1591 (0.4324) loss_objectness: 0.0012 (0.0106) loss_rpn_box_reg: 0.0054 (0.0064) time: 0.1690 data: 0.0010 max mem: 3044
Epoch: [0] [59/60] eta: 0:00:00 lr: 0.005000 loss: 0.3385 (0.7928) loss_classifier: 0.0381 (0.1615) loss_box_reg: 0.1197 (0.2211) loss_mask: 0.1584 (0.3941) loss_objectness: 0.0007 (0.0095) loss_rpn_box_reg: 0.0054 (0.0066) time: 0.1746 data: 0.0010 max mem: 3044
Epoch: [0] Total time: 0:00:16 (0.2679 s / it)
- Evaluation results
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.679
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.986
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.839
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.650
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.618
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.689
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.283
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.738
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.738
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.650
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.685
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.745
IoU metric: segm
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.690
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.986
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.882
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.501
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.512
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.706
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.280
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.731
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.731
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.550
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.677
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.740
- Output