Sandwhich is a mobile app built to settle the sandwich debate: it uses machine learning (Flutter + TensorFlow Lite) to determine whether the picture you've taken is a sandwich.
Available on Google Play and the App Store.
Getting started
Building from source
Download the repository: motionmobs/sandwhich
cd sandwhich
Run Sandwhich:
flutter run
Train your model
To train your model, you need to install Docker and ImageMagick.
brew update && brew install imagemagick
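Note that the brew line above installs ImageMagick only, so install Docker separately. A quick way to confirm both tools are on your PATH before continuing (`convert` ships with ImageMagick):

```shell
# check that the prerequisites are on the PATH (`convert` comes with imagemagick)
for tool in docker convert; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing -- install it before continuing"
  fi
done
```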
Set up the file structure for your images:
training_images
├── input
│   ├── not-sandwich
│   │   ├── processed    # processed not-sandwich images
│   │   └── unprocessed  # original, not yet processed not-sandwich images
│   └── sandwich
│       ├── processed    # processed sandwich images
│       └── unprocessed  # original, not yet processed sandwich images
└── output               # ready for the model
    ├── not-sandwich     # minimum 20 images to train
    └── sandwich         # minimum 20 images to train
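The tree above can be created in one step; a quick sketch assuming a bash shell (brace expansion):

```shell
# create the training_images layout described above
mkdir -p training_images/input/{not-sandwich,sandwich}/{processed,unprocessed}
mkdir -p training_images/output/{not-sandwich,sandwich}
```

Drop your original photos into the two unprocessed folders.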
Once the original images are loaded into the folders as described above, process the images:
cd training_images
./conversion.sh
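The script in the repo is authoritative for what processing actually happens; as a rough illustration of this kind of preprocessing (the size, format, and exact ImageMagick flags here are assumptions, not conversion.sh's actual behavior), each unprocessed image can be center-cropped to a fixed square:

```shell
# illustrative only -- the real conversion.sh in the repo is authoritative.
# resize each unprocessed sandwich image to a 224x224 square (size is an assumption)
mkdir -p input/sandwich/processed
for src in input/sandwich/unprocessed/*.jpg; do
  [ -e "$src" ] || continue   # skip when the glob matches nothing
  convert "$src" -resize 224x224^ -gravity center -extent 224x224 \
    "input/sandwich/processed/$(basename "$src")"
done
```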
After all images have been processed, we build the Docker image, run the script to retrain the model, and convert the retrained model to a .tflite model we can use in the app.

Go back to the root of the project and run:
cd -
./train.sh
Prepare to wait a while.
Testing
To test the new model, run ./test_model.sh $image_path, where $image_path is the path to any image you want to test with. For convenience, you can drag an image into the terminal to have its path appended to whatever is currently typed there.
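To run the script over a whole folder of photos at once, a small loop works; sample_images/ here is a hypothetical directory of your own test shots:

```shell
# batch-test the model against every file in a (hypothetical) sample_images/ folder
tested=0
for image_path in sample_images/*; do
  [ -e "$image_path" ] || continue   # skip when the folder is empty or missing
  ./test_model.sh "$image_path"
  tested=$((tested + 1))
done
echo "tested $tested image(s)"
```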
- When ready to use the new model, move it to the assets folder and overwrite the old model. Back up the old model first if desired.
- Run the app, find some sandwiches, and enjoy!
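The swap itself is a couple of copies; both filenames below are assumptions (check the assets folder and the training output for the actual names):

```shell
# illustrative: back up the current model and install the retrained one
old=assets/model.tflite   # assumed name of the bundled model
new=model.tflite          # assumed output location of train.sh
if [ -f "$old" ]; then
  cp "$old" "$old.bak"    # keep a backup of the old model
fi
if [ -f "$new" ]; then
  mkdir -p assets
  cp "$new" "$old"
fi
```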
To release on Android:
flutter build apk --target-platform android-arm64