In this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate *new* images of faces that look as realistic as possible!

The project will be broken down into a series of tasks, from **loading in data to defining and training adversarial networks**. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise.

You'll be using the CelebA dataset to train your adversarial networks. This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training.

> If you are working locally, you can download this data as a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data `processed_celeba_small/`.

The CelebA dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations; you'll only need the images. Note that these are color images with 3 color channels (RGB) each.

Since the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This *pre-processed* dataset is a smaller subset of the very large CelebA data.

> There are a few other steps that you'll need to **transform** this data and create a **DataLoader**.

Exercise: Complete the following `get_dataloader` function, such that it satisfies these requirements:

* Your images should be square, Tensor images of size `image_size x image_size` in the x and y dimension.
* Your function should return a DataLoader that shuffles and batches these Tensor images.

To create a dataset given a directory of images, it's recommended that you use PyTorch's `ImageFolder` wrapper, with a root directory `processed_celeba_small/` and data transformation passed in.

```python
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
    """
    Batch the neural network data using DataLoader
    :param batch_size: The size of each batch; the number of images in a batch
    :param image_size: The square size of the image data (x, y)
    :param data_dir: Directory where image data is located
    :return: DataLoader with batched data
    """
    # Implement function and return a dataloader
    return DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
```