
I'm consolidating notes that were scattered all over, and it turns out this code only survives on my old blog; the git repo is gone... Not great. Let's keep things better organized and keep studying.

 

This code is taken from the PyTorch tutorial.

Link: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

 

python: 3.7.5

torch: 1.8.1+cu111

pycocotools: 2.0.2

 

0. For environment setup, refer to the following link.

https://mizzlena.tistory.com/entry/%EC%9D%B8%EA%B3%B5%EC%A7%80%EB%8A%A5-Pytorch-Install
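
For reference, a pip command matching the versions above might look like this (assuming CUDA 11.1 and the standard PyTorch wheel index; adjust for your GPU):

pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html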

 


 

1. Install pycocotools in the virtual environment.

pip install pycocotools
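
To verify the install, try importing the COCO API (this import path ships with pycocotools):

python -c "from pycocotools.coco import COCO; print('pycocotools OK')"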

 

2. Download the image files (the Penn-Fudan pedestrian dataset).

https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip
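
If you prefer to script the download, here is a minimal sketch using only the Python standard library (if the server rejects the plain request, download the zip manually in a browser):

import urllib.request
import zipfile

url = "https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip"
# download the archive next to the script
urllib.request.urlretrieve(url, "PennFudanPed.zip")
# extract; this creates the PennFudanPed/ folder the dataset class expects
with zipfile.ZipFile("PennFudanPed.zip") as zf:
    zf.extractall(".")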

 

3. Download the additional files from the torchvision GitHub repository.

https://github.com/pytorch/vision

 


  • Copy the vision/references/detection folder.
  • Paste it into the directory of the script you will run; the imports in step 5 expect the vision/references/detection path to be preserved (see the layout sketch below).
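
After copying, the layout should look roughly like this (main.py is an assumed name for the step 5 script):

project/
├── main.py
├── PennFudanPed/
└── vision/
    └── references/
        └── detection/
            ├── coco_eval.py
            ├── coco_utils.py
            ├── engine.py
            ├── group_by_aspect_ratio.py
            ├── presets.py
            ├── train.py
            ├── transforms.py
            └── utils.py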

 

4. Modify the files.

  • These edits differ between references and may not match exactly as versions change.
  • Be sure to check your pytorch and torchvision versions.
  • Change the import paths to relative imports (see the engine.py sketch after this list).
  • The affected files are:
    • coco_eval.py
    • coco_utils.py
    • engine.py
    • presets.py
    • train.py
  import utils => from . import utils
  import transforms as T => from . import transforms as T
  import presets => from . import presets
  from coco_utils => from .coco_utils
  from coco_eval => from .coco_eval
  from group_by_aspect_ratio => from .group_by_aspect_ratio
  from engine => from .engine
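
For example, after the edit the top of engine.py would look roughly like this (based on the torchvision 0.9.x reference files; your copy may differ slightly):

import math
import sys
import time
import torch

import torchvision.models.detection.mask_rcnn

# relative imports so the folder works as the vision.references.detection package
from .coco_utils import get_coco_api_from_dataset
from .coco_eval import CocoEvaluator
from . import utils

If the imports still fail, adding empty __init__.py files to vision/, vision/references/, and vision/references/detection/ makes the package explicit.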

 

5. Run the code.

import os
import numpy as np
import torch
from PIL import Image

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

from vision.references.detection.engine import train_one_epoch, evaluate
import vision.references.detection.utils as utils
import vision.references.detection.transforms as T

def get_transform(train):
    transforms = []
    # converts the image, a PIL image, into a PyTorch Tensor
    transforms.append(T.ToTensor())
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transforms.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transforms)

class PennFudanDataset(torch.utils.data.Dataset):
    def __init__(self, root, transforms):
        self.root = root
        self.transforms = transforms
        # load all image files, sorting them to
        # ensure that they are aligned
        self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
        self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

    def __getitem__(self, idx):
        # load images and masks
        img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
        mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
        img = Image.open(img_path).convert("RGB")
        # note that we haven't converted the mask to RGB,
        # because each color corresponds to a different instance
        # with 0 being background
        mask = Image.open(mask_path)

        mask = np.array(mask)
        # instances are encoded as different colors
        obj_ids = np.unique(mask)
        # first id is the background, so remove it
        obj_ids = obj_ids[1:]

        # split the color-encoded mask into a set
        # of binary masks
        masks = mask == obj_ids[:, None, None]
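        # masks has shape (num_objs, H, W): one boolean mask per instance via broadcasting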

        # get bounding box coordinates for each mask
        num_objs = len(obj_ids)
        boxes = []
        for i in range(num_objs):
            pos = np.where(masks[i])
            xmin = np.min(pos[1])
            xmax = np.max(pos[1])
            ymin = np.min(pos[0])
            ymax = np.max(pos[0])
            boxes.append([xmin, ymin, xmax, ymax])

        boxes = torch.as_tensor(boxes, dtype=torch.float32)
        # there is only one class
        labels = torch.ones((num_objs,), dtype=torch.int64)
        masks = torch.as_tensor(masks, dtype=torch.uint8)

        image_id = torch.tensor([idx])
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)

        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["masks"] = masks
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd

        if self.transforms is not None:
            img, target = self.transforms(img, target)

        return img, target

    def __len__(self):
        return len(self.imgs)

def get_model_instance_segmentation(num_classes):
    # load an instance segmentation model pre-trained on COCO
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

    # get number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # now get the number of input features for the mask classifier
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    hidden_layer = 256
    # and replace the mask predictor with a new one
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
                                                       hidden_layer,
                                                       num_classes)

    return model


def main():
    # train on the GPU or on the CPU, if a GPU is not available
    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

    # our dataset has two classes only - background and person
    num_classes = 2
    # use our dataset and defined transformations
    dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
    dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))

    # split the dataset in train and test set
    indices = torch.randperm(len(dataset)).tolist()
    dataset = torch.utils.data.Subset(dataset, indices[:-50])
    dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])

    # define training and validation data loaders
    data_loader = torch.utils.data.DataLoader(
        dataset, batch_size=2, shuffle=True, num_workers=4,
        collate_fn=utils.collate_fn)

    data_loader_test = torch.utils.data.DataLoader(
        dataset_test, batch_size=1, shuffle=False, num_workers=4,
        collate_fn=utils.collate_fn)

    # get the model using our helper function
    model = get_model_instance_segmentation(num_classes)

    # move model to the right device
    model.to(device)
    print(model)

    # construct an optimizer
    params = [p for p in model.parameters() if p.requires_grad]
    #print(params)
    optimizer = torch.optim.SGD(params, lr=0.005,
                                momentum=0.9, weight_decay=0.0005)
    # and a learning rate scheduler
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                   step_size=3,
                                                   gamma=0.1)

    # the tutorial trains for 10 epochs; 1 is used here for a quick test
    #num_epochs = 10
    num_epochs = 1

    for epoch in range(num_epochs):
        # train for one epoch, printing every 10 iterations
        train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
        # update the learning rate
        lr_scheduler.step()
        # evaluate on the test dataset
        evaluate(model, data_loader_test, device=device)

    print("That's it!")

    ##################################
    img, _ = dataset_test[10]
    # put the model in evaluation mode
    model.eval()
    with torch.no_grad():
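        # the model takes a list of 3xHxW tensors and returns a list of prediction dicts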
        prediction = model([img.to(device)])

    # convert the CHW float tensor in [0, 1] back to an HWC uint8 image
    origin_img = Image.fromarray(img.mul(255).permute(1, 2, 0).byte().numpy())
    mask_img = Image.fromarray(prediction[0]['masks'][0, 0].mul(255).byte().cpu().numpy())

    origin_img.show()
    mask_img.show()

if __name__ == "__main__":
    main()
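
If Image.show() does nothing in your environment (e.g., over SSH or on a headless server), a small sketch to append at the end of main() saves the results and the fine-tuned weights instead (file names are arbitrary):

    # save the visualization and the fine-tuned weights to disk
    origin_img.save("origin.png")
    mask_img.save("mask.png")
    torch.save(model.state_dict(), "maskrcnn_pennfudan.pth")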

 

6. Results

 

 

7. Git

https://github.com/MizzleAa/Pytorch-Object-Detection-Fintuning-Tutorial

 


 
