Try out Andrew Ng's bottom-up style Machine Learning course on Coursera.

Kaggle has a Python-based tool to download its data: pip install kaggle --upgrade

Pandas

It is the standard for accessing tabular data.

Ex: this shows the header and the first few rows of a CSV:

import pandas as pd

df = pd.read_csv(path/'train_v2.csv')
df.head()

Data Block API

https://docs.fast.ai/data_block.html

Dataset just defines __getitem__ and __len__, which have to be implemented by all types of datasets.
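A minimal sketch in plain PyTorch; the class and numbers are made up for illustration:

from torch.utils.data import Dataset

class SquaresDataset(Dataset):
    def __init__(self, n): self.n = n
    def __len__(self): return self.n             # len(ds)
    def __getitem__(self, i): return i, i * i    # ds[i] -> (x, y)

ds = SquaresDataset(10)
len(ds), ds[3]   # (10, (3, 9))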

Ex: creating a dataset (here src is the dataset):

src = (ImageList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg')
       .split_by_rand_pct(0.2)
       .label_from_df(label_delim=' '))

DataLoader is used to create minibatches; you specify the dataset and the batch size.
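With plain PyTorch that looks like this, reusing ds from the sketch above:

from torch.utils.data import DataLoader

dl = DataLoader(ds, batch_size=4, shuffle=True)   # minibatches of 4
xb, yb = next(iter(dl))                           # one minibatch of (x, y) tensors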

DataBunch binds together a training DataLoader and a validation DataLoader, with optionally a test DataLoader.

Ex: creating the DataLoader followed by the DataBunch in one go:

data = (src.transform(tfms, size=128)
        .databunch().normalize(imagenet_stats))

COCO is a famous dataset for object detection.

Transforms will flip the image horizontally at random by default, which augments the data, but for some datasets you might need to flip them vertically too.

Ex: tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)

f_score: a single number that summarizes false positives and false negatives (via precision and recall). There are different variants like F1 and F2; Kaggle uses the F2 score to judge this competition.
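A small sketch of the general F-beta formula (beta=2 gives F2, which weights recall more heavily than precision); the numbers are illustrative:

def f_beta(precision, recall, beta=2.0):
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

f_beta(0.8, 0.6)   # ~0.632: F2 leans toward recall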

argmax = the index of the class with the highest predicted probability.

accuracy_thresh is used when the classification can lead to more than one class; we can pass a minimum threshold to it, above which a class counts as predicted (see the metrics sketch after the partial-function example below).

Partial function: the same function with a certain parameter fixed.

from functools import partial

def somefn(x1, x2): ...

somefn_x2_5 = partial(somefn, x2=5)  # somefn_x2_5(x1) is the same as somefn(x1, x2=5)
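This is how the planet example builds its metrics; a sketch assuming fastai v1, where accuracy_thresh and fbeta are the multi-label metrics:

from functools import partial
from fastai.vision import *   # fastai v1

acc_02 = partial(accuracy_thresh, thresh=0.2)   # accuracy with a 0.2 threshold
f_score = partial(fbeta, thresh=0.2)            # fbeta defaults to beta=2, i.e. F2
learn = cnn_learner(data, models.resnet50, metrics=[acc_02, f_score])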

Transfer learning: in the planet example we initially used 128x128 images; now we can create a new DataBunch with 256x256 images and keep training the same learner, which keeps what was already learned and adds more on top of it.
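A sketch of that swap, reusing src and tfms from above; the epoch count and learning rate are illustrative:

data = (src.transform(tfms, size=256)            # same data, bigger images
        .databunch().normalize(imagenet_stats))
learn.data = data                                # point the existing learner at it
learn.freeze()                                   # retrain just the head first
learn.fit_one_cycle(5, slice(1e-2))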

Choosing a learning rate:
Most of the left part of the LR finder plot looks the same. After the first round of fitting, we pick an LR about 10x back from the point where the loss starts shooting up, and train with a range from that value up to the initial LR.
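A sketch of that recipe in fastai v1; the exact numbers come from reading your own plot:

learn.unfreeze()
learn.lr_find()
learn.recorder.plot()                      # find where the loss starts shooting up
learn.fit_one_cycle(5, slice(1e-5, 1e-2))  # 1e-5: ~10x before that point; 1e-2: ~the initial LR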

Collection of data sets: https://course.fast.ai/datasets

Segmentation

We use Learner.create_unet, which is a better model for segmentation than a plain CNN.

Learner.create_unet(****).to_fp16() gives mixed floating-point precision. By default everything uses 32-bit floating point, but the fastai library supports mixed precision, which can lead to faster training time and sometimes better results, given that you have a recent graphics card that supports it and the latest drivers.
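A minimal sketch, assuming fastai v1's unet_learner (the current name for this API):

learn = unet_learner(data, models.resnet34).to_fp16()   # mixed-precision U-Net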

Face Center example

Regression = when the output is some continuous number.

NLP Classification

We can't directly process text as it is not numbers, so we need to convert it. This is done in two different steps: tokenization and numericalization.

Tokenization

The first step of processing we make the texts go through is to split the raw sentences into words, or more exactly tokens. The easiest way to do this would be to split the string on spaces, but we can be smarter (a small sketch follows the list):

  • we need to take care of punctuation
  • some words are contractions of two different words, like isn’t or don’t
  • we may need to clean some parts of our texts, if there’s HTML code for instance
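A tiny illustration of both steps in plain Python; the sentence and vocab are made up:

# Tokenization: naive space-splitting leaves punctuation, contractions, and HTML glued on.
text = "Isn't this great? <br />It is."
tokens = text.split()
print(tokens)   # ["Isn't", 'this', 'great?', '<br', '/>It', 'is.']

# Numericalization: map each token to an integer id via a vocab.
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[tok] for tok in tokens]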

In text classification we do two steps:

  1. Language model

  2. Classification model
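A sketch of those two steps in fastai v1 (ULMFiT style); data_lm and data_clas are assumed DataBunches built from the texts:

# 1. Fine-tune a language model on the corpus.
learn_lm = language_model_learner(data_lm, AWD_LSTM)
learn_lm.fit_one_cycle(1)
learn_lm.save_encoder('ft_enc')                  # keep the fine-tuned encoder

# 2. Train the classifier on top of that encoder.
learn_clas = text_classifier_learner(data_clas, AWD_LSTM)
learn_clas.load_encoder('ft_enc')
learn_clas.fit_one_cycle(1)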

What is Deep Learning

Basically

(Input Matrix x Weight Matrix -> Activation Fn) -> (that output x next Weight Matrix -> Activation Fn) -> ... then see how close the final output is to the target (minimize the loss fn).

Update the weight matrices using gradient descent, then repeat. That's it.
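A minimal sketch of that loop in numpy; the shapes, data, and learning rate are made up:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # input matrix
y = rng.normal(size=(100, 1))                 # target
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))
lr = 0.01

for step in range(100):
    h = np.maximum(X @ W1, 0)                 # input x weights -> ReLU activation
    pred = h @ W2                             # next layer
    loss = ((pred - y) ** 2).mean()           # how close to the target?
    dpred = 2 * (pred - y) / len(X)           # gradients via the chain rule
    dW2 = h.T @ dpred
    dW1 = X.T @ ((dpred @ W2.T) * (h > 0))
    W1 -= lr * dW1                            # update the weight matrices...
    W2 -= lr * dW2                            # ...then repeat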

Use this to learn more: http://neuralnetworksanddeeplearning.com

Activation fn

It is the function applied to the output after the matrix multiplication.

Initially people used binary step, logistic (sigmoid), TanH, ArcTan, etc.

But nowadays everyone uses the Rectified Linear Unit (ReLU), which is nothing but max(x, 0), i.e. zeroing out all negatives.
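The same thing in one line of numpy:

import numpy as np
np.maximum(np.array([-2., -0.5, 0., 1., 3.]), 0)   # -> [0., 0., 0., 1., 3.]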

From Course v4

Script to Train a Model and Predict End to End

from fastbook import *
from fastai.vision.widgets import *
#Assuming Data is downloaded
bear_types = 'grizzly','black','teddys'
path = Path('../data/bears')
#set up a DataBlock; valid_pct=0.2 means use 20% of the data as validation
bears = DataBlock(
    blocks=(ImageBlock, CategoryBlock), 
    get_items=get_image_files, 
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=Resize(128))
#modify the data block to use RandomResizedCrop and augmentations
bears = bears.new(item_tfms=RandomResizedCrop(128, min_scale=0.5), batch_tfms=aug_transforms())
#load data
dls = bears.dataloaders(path)
dls.train.show_batch(max_n=8, nrows=2, unique=True)
#create a convolutional neural network with resnet18 as the base model
learn = cnn_learner(dls, resnet18, metrics=error_rate)
#fine tune/use transfer learning for 4 epochs
learn.fine_tune(4)
#generate an interpretation
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
#plot the top 5 losses, i.e. the most confidently wrong predictions
interp.plot_top_losses(5, nrows=1)
#export the trained model as a pkl file
learn.export('../data/bears/new_bear.pkl')
#load saved model for inference
learn_inf = load_learner(path/'new_bear.pkl')
#predict a given image
learn_inf.predict('../data/bears/grizzly/00000068.jpg')

Ethics

Recommendation systems are predicting what content people will like, but they also have a lot of power in determining what content people even see.