
Update to ML 2.0
andrewdalpino committed Apr 17, 2022
1 parent ce10f64 commit e63cda5
Showing 4 changed files with 17 additions and 16 deletions.
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

-Copyright (c) 2021 Andrew DalPino
+Copyright (c) 2022 Andrew DalPino

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
10 changes: 5 additions & 5 deletions README.md
@@ -55,7 +55,7 @@ $dataset = new Labeled($samples, $labels);
We're going to use a transformer [Pipeline](https://docs.rubixml.com/latest/pipeline.html) to shape the dataset into the correct format for our learner. We know that the size of each sample image in the MNIST dataset is 28 x 28 pixels, but just to make sure that future samples are always the correct input size we'll add an [Image Resizer](https://docs.rubixml.com/latest/transformers/image-resizer.html). Then, to convert the image into raw pixel data we'll use the [Image Vectorizer](https://docs.rubixml.com/latest/transformers/image-vectorizer.html) which extracts continuous raw color channel values from the image. Since the sample images are black and white, we only need to use 1 color channel per pixel. At the end of the pipeline we'll center and scale the dataset using the [Z Scale Standardizer](https://docs.rubixml.com/latest/transformers/z-scale-standardizer.html) to help speed up the convergence of the neural network.
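For reference, here is a minimal sketch of that transformer stack on its own, using the same classes that appear in the full pipeline later in this diff (the 28 x 28 target size is taken from the MNIST description above):

```php
<?php

use Rubix\ML\Transformers\ImageResizer;
use Rubix\ML\Transformers\ImageVectorizer;
use Rubix\ML\Transformers\ZScaleStandardizer;

// Resize every image to 28 x 28, extract 1 grayscale channel per pixel,
// then center each feature to zero mean and scale it to unit variance.
$transformers = [
    new ImageResizer(28, 28),
    new ImageVectorizer(true),
    new ZScaleStandardizer(),
];
```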

### Instantiating the Learner
-Now, we'll go ahead and instantiate our [Multilayer Perceptron](https://docs.rubixml.com/latest/classifiers/multilayer-perceptron.html) classifier. Let's consider a neural network architecture suited for the MNIST problem consisting of 3 groups of [Dense](https://docs.rubixml.com/latest/neural-network/hidden-layers/dense.html) neuronal layers, followed by a [Leaky ReLU](https://docs.rubixml.com/latest/neural-network/activation-functions/leaky-relu.html) activation layer, and then a mild [Dropout](https://docs.rubixml.com/latest/neural-network/hidden-layers/dropout.html) layer to act as a regularizer. The output layer adds an additional layer of neurons with a [Softmax](https://docs.rubixml.com/latest/neural-network/activation-functions/softmax.html) activation making this particular network architecture 4 layers deep.
+Now, we'll go ahead and instantiate our [Multilayer Perceptron](https://docs.rubixml.com/latest/classifiers/multilayer-perceptron.html) classifier. Let's consider a neural network architecture suited for the MNIST problem consisting of 3 groups of [Dense](https://docs.rubixml.com/latest/neural-network/hidden-layers/dense.html) neuronal layers, followed by a [ReLU](https://docs.rubixml.com/latest/neural-network/activation-functions/relu.html) activation layer, and then a mild [Dropout](https://docs.rubixml.com/latest/neural-network/hidden-layers/dropout.html) layer to act as a regularizer. The output layer adds an additional layer of neurons with a [Softmax](https://docs.rubixml.com/latest/neural-network/activation-functions/softmax.html) activation making this particular network architecture 4 layers deep.

Next, we'll set the batch size to 256. The batch size is the number of samples sent through the network at a time. We'll also specify an optimizer and learning rate which determines the update step of the Gradient Descent algorithm. The [Adam](https://docs.rubixml.com/latest/neural-network/optimizers/adam.html) optimizer uses a combination of [Momentum](https://docs.rubixml.com/latest/neural-network/optimizers/momentum.html) and [RMS Prop](https://docs.rubixml.com/latest/neural-network/optimizers/rms-prop.html) to make its updates and usually converges faster than standard *stochastic* Gradient Descent. It uses a global learning rate to control the magnitude of the step which we'll set to 0.0001 for this example.
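For intuition about what the optimizer is doing, here is a minimal, self-contained sketch of the textbook Adam update for a single parameter. The toy objective and variable names are illustrative assumptions, not Rubix ML's internal code:

```php
<?php

$beta1 = 0.9;    // decay rate of the Momentum (1st moment) estimate
$beta2 = 0.999;  // decay rate of the RMS (2nd moment) estimate
$epsilon = 1e-8; // guards against division by zero
$rate = 0.0001;  // global learning rate, as in new Adam(0.0001)

$param = 0.5;    // a single weight to optimize
$m = 0.0;        // running Momentum estimate
$v = 0.0;        // running RMS estimate

for ($t = 1; $t <= 100; $t++) {
    $gradient = 2.0 * $param; // toy gradient of f(w) = w^2

    $m = $beta1 * $m + (1.0 - $beta1) * $gradient;      // Momentum half
    $v = $beta2 * $v + (1.0 - $beta2) * $gradient ** 2; // RMS Prop half

    $mHat = $m / (1.0 - $beta1 ** $t); // bias-corrected estimates
    $vHat = $v / (1.0 - $beta2 ** $t);

    $param -= $rate * $mHat / (sqrt($vHat) + $epsilon); // update step
}
```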

@@ -69,7 +69,7 @@ use Rubix\ML\Classifiers\MultilayerPerceptron;
use Rubix\ML\NeuralNet\Layers\Dense;
use Rubix\ML\NeuralNet\Layers\Dropout;
use Rubix\ML\NeuralNet\Layers\Activation;
-use Rubix\ML\NeuralNet\ActivationFunctions\LeakyReLU;
+use Rubix\ML\NeuralNet\ActivationFunctions\ReLU;
use Rubix\ML\NeuralNet\Optimizers\Adam;
use Rubix\ML\Persisters\Filesystem;

@@ -80,13 +80,13 @@ $estimator = new PersistentModel(
        new ZScaleStandardizer(),
    ], new MultilayerPerceptron([
        new Dense(100),
-       new Activation(new LeakyReLU()),
+       new Activation(new ReLU()),
        new Dropout(0.2),
        new Dense(100),
-       new Activation(new LeakyReLU()),
+       new Activation(new ReLU()),
        new Dropout(0.2),
        new Dense(100),
-       new Activation(new LeakyReLU()),
+       new Activation(new ReLU()),
        new Dropout(0.2),
    ], 256, new Adam(0.0001))),
    new Filesystem('mnist.rbx', true)
4 changes: 2 additions & 2 deletions composer.json
@@ -6,7 +6,7 @@
"license": "MIT",
"keywords": [
"classification", "cross validation", "dataset", "data science", "dropout", "example project",
"feed forward", "image recognition", "image classification", "leaky relu", "machine learning",
"feed forward", "image recognition", "image classification", "relu", "machine learning",
"ml", "mnist", "multilayer perceptron", "neural network", "php", "php ml", "relu", "rubix ml",
"rubixml", "tutorial"
],
@@ -20,7 +20,7 @@
"require": {
"php": ">=7.4",
"ext-gd": "*",
"rubix/ml": "^1.0"
"rubix/ml": "^2.0"
},
"scripts": {
"train": "@php train.php",
17 changes: 9 additions & 8 deletions train.php
@@ -13,7 +13,7 @@
use Rubix\ML\NeuralNet\Layers\Dense;
use Rubix\ML\NeuralNet\Layers\Dropout;
use Rubix\ML\NeuralNet\Layers\Activation;
-use Rubix\ML\NeuralNet\ActivationFunctions\LeakyReLU;
+use Rubix\ML\NeuralNet\ActivationFunctions\ReLU;
use Rubix\ML\NeuralNet\Optimizers\Adam;
use Rubix\ML\Persisters\Filesystem;
use Rubix\ML\Extractors\CSV;
@@ -30,7 +30,8 @@
    foreach (glob("training/$label/*.png") as $file) {
        $samples[] = [imagecreatefrompng($file)];
        $labels[] = "#$label";
    }
+
}

$dataset = new Labeled($samples, $labels);
@@ -41,14 +42,14 @@
        new ImageVectorizer(true),
        new ZScaleStandardizer(),
    ], new MultilayerPerceptron([
-       new Dense(100),
-       new Activation(new LeakyReLU()),
+       new Dense(128),
+       new Activation(new ReLU()),
        new Dropout(0.2),
-       new Dense(100),
-       new Activation(new LeakyReLU()),
+       new Dense(128),
+       new Activation(new ReLU()),
        new Dropout(0.2),
-       new Dense(100),
-       new Activation(new LeakyReLU()),
+       new Dense(128),
+       new Activation(new ReLU()),
        new Dropout(0.2),
    ], 256, new Adam(0.0001))),
    new Filesystem('mnist.rbx', true)
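The diff is truncated above. As a hypothetical usage sketch (not part of this commit), a training script built on the estimator would typically finish by fitting and persisting the model:

```php
// Hypothetical continuation: train the wrapped pipeline on the labeled
// dataset, then write the model to mnist.rbx via the Filesystem persister.
$estimator->train($dataset);

$estimator->save();
```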
