
Shipping deep learning models to production is a non-trivial task.

If you don't believe me, take a second and look at the "tech giants" such as Amazon, Google, Microsoft, etc. Nearly all of them provide some method to ship your machine learning/deep learning models to production in the cloud.

Going with a model deployment service is perfectly fine and acceptable… but what if you wanted to own the entire process and not rely on external services?

This type of situation is more common than you may think:

- An in-house project where you cannot move sensitive data outside your network
- A project that specifies that the entire infrastructure must reside within the company
- A government organization that needs a private cloud
- A startup that is in "stealth mode" and needs to stress test their service/application in-house

How would you go about shipping your deep learning models to production in these situations, and perhaps most importantly, making it scalable at the same time?

Today's post is the final chapter in our three part series on building a deep learning model server REST API:

- Part one (which was posted on the official Keras.io blog!) is a simple Keras + deep learning REST API which is intended for single threaded use with no concurrent requests. This method is a perfect fit if this is your first time building a deep learning web server or if you're working on a home/hobby project.
- In part two we demonstrated how to leverage Redis along with message queueing/message brokering paradigms to efficiently batch process incoming inference requests (but with a small caveat on server threading that could cause problems). A minimal sketch of this queueing pattern appears below.
- In the final part of this series, I'll show you how to resolve these server threading issues, further scale our method, provide benchmarks, and demonstrate how to efficiently scale deep learning in production using Keras, Redis, Flask, and Apache.

As the results of our stress test will demonstrate, our single GPU machine can easily handle 500 concurrent requests (0.05 second delay in between each one) without ever breaking a sweat, and this performance continues to scale as well.

To learn how to ship your own deep learning models to production using Keras, Redis, Flask, and Apache, just keep reading.
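As a quick refresher on the queueing pattern from part two, here is a minimal sketch of the idea: the Flask endpoint pushes serialized requests onto a Redis list, and a separate worker pops them off in batches and runs inference once per batch. The queue key, batch size, and the `run_inference` helper are illustrative assumptions for this sketch, not the exact code from the series.

```python
# Minimal sketch of the Redis message-broker pattern described above.
# IMAGE_QUEUE, BATCH_SIZE, and run_inference are illustrative assumptions,
# not the exact names used in the series.
import json
import time

import redis

db = redis.Redis(host="localhost", port=6379, db=0)
IMAGE_QUEUE = "image_queue"   # assumed Redis list used as the queue
BATCH_SIZE = 32               # assumed inference batch size

def enqueue_request(request_id, payload):
    # The Flask endpoint pushes each incoming request onto the queue.
    db.rpush(IMAGE_QUEUE, json.dumps({"id": request_id, "payload": payload}))

def worker_loop(run_inference):
    # A separate worker process pops up to BATCH_SIZE requests at a time
    # and runs the (user-supplied) model on the whole batch at once.
    while True:
        batch = db.lrange(IMAGE_QUEUE, 0, BATCH_SIZE - 1)
        if batch:
            db.ltrim(IMAGE_QUEUE, len(batch), -1)
            requests = [json.loads(item) for item in batch]
            results = run_inference([r["payload"] for r in requests])
            for r, result in zip(requests, results):
                # Store each result under its request ID so the endpoint can poll for it.
                db.set(r["id"], json.dumps(result))
        time.sleep(0.05)  # avoid spinning when the queue is empty
```

In the actual series the payloads are encoded images and the worker loads the Keras model once at startup; the sketch above only shows the queueing mechanics.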
Click here to download the source code to this post
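For reference, the kind of stress test mentioned above (firing hundreds of concurrent requests with a short delay between launches) can be sketched roughly as follows. This is only an illustration of the idea, not the benchmark script from the post; the endpoint URL, test image path, and request counts are placeholder assumptions.

```python
# Rough sketch of a stress-test client: launch NUM_REQUESTS requests at the API,
# spaced SLEEP_BETWEEN seconds apart, each in its own thread.
# API_URL, IMAGE_PATH, and the counts are placeholder assumptions.
import time
from threading import Thread

import requests

API_URL = "http://localhost/predict"   # assumed prediction endpoint
IMAGE_PATH = "test_image.jpg"          # placeholder test image
NUM_REQUESTS = 500
SLEEP_BETWEEN = 0.05

def call_predict(n):
    # Submit one image to the API and report whether the call succeeded.
    with open(IMAGE_PATH, "rb") as f:
        payload = {"image": f.read()}
    r = requests.post(API_URL, files=payload)
    status = "OK" if r.status_code == 200 else "FAIL"
    print(f"[INFO] request {n}: {status}")

threads = []
for i in range(NUM_REQUESTS):
    t = Thread(target=call_predict, args=(i,))
    t.daemon = True
    t.start()
    threads.append(t)
    time.sleep(SLEEP_BETWEEN)

for t in threads:
    t.join()
```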