
Asked: November 27, 2024

How to deploy an ML model as an API


Hello 👋, this is a quick guide to deploying an ML model as an API, so let's get started.

Intro

First off, the model we are going to be using is a deepfake model called First Order Motion. Deepfakes allow you to create an artificial version of a person saying or doing something. I first found out about this particular model on Two Minute Papers (an awesome YT channel for lovers of AI ⚡) and wanted to try it for myself.

In this article we will take this model, which could previously only be tested through the Jupyter notebook in its repo, and, through the power of Python and cloud computing, make it accessible as an API.

Tools used

  • Python and Flask: to make the API.
  • Docker: to build the Docker image of the API.
  • Google account with billing and Compute Engine enabled: to create the VM instance where the container will be deployed.

Details

Step 1

Make the app.py

This is the main file of the project; it is where the API's default route and POST request are defined. The route for the home page can be defined by routing to “/” and rendering a landing HTML file, as shown below:

```python
@app.route("/")
def homepage():
    return render_template("index.html", title="JUST WORK")
```
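As a quick sanity check, a route like this can be exercised with Flask's built-in test client before deploying anywhere. A minimal standalone sketch (it returns a plain string instead of rendering index.html so it stays self-contained):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def homepage():
    # The article's version renders index.html; a plain string keeps
    # this sketch self-contained.
    return "JUST WORK"

# Flask's test client hits the route without starting a server.
client = app.test_client()
response = client.get("/")
print(response.status_code)              # 200
print(response.get_data(as_text=True))   # JUST WORK
```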

Next is defining the POST request, which does the work of making the specified image mirror the specified video. The function for this request is based on the model's inference code, as seen in the project's Colab notebook.


This inference code was then given the route “/post” and the appropriate headers, as seen below:

```python
@app.route('/post', methods=['GET', 'POST'])
@cross_origin(origin='*', headers=['Content-Type', 'Authorization'])
def post():
```

The post function has some alterations and tweaks from the original notebook version to handle the inputs and their processing; after that, it is essentially the same use of the functions for loading the model checkpoints and making the deepfake:

```python
generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
                                          checkpoint_path='../vox-cpk.pth.tar',
                                          cpu=True)  # for GPU, specify cpu=False
print("generator done")

predictions = make_animation(source_image=image, driving_video=driving_video,
                             generator=generator, kp_detector=kp_detector,
                             relative=True, cpu=True)  # for GPU, specify cpu=False

imageio.mimsave('generatedVideo.mp4',
                [img_as_ubyte(frame) for frame in predictions], fps=fps)
```

The full code can be found here
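Most of the input handling amounts to converting the uploaded frames into the float arrays in [0, 1] that the model expects (the actual code reads and resizes them with imageio and skimage). A hypothetical numpy-only sketch of just the normalization step; the function name is illustrative:

```python
import numpy as np

def to_model_frame(frame_uint8):
    # Uploaded images arrive as uint8 in [0, 255]; the model expects
    # float32 frames in [0.0, 1.0].
    return frame_uint8.astype(np.float32) / 255.0

frame = np.array([[0, 128, 255]], dtype=np.uint8)
out = to_model_frame(frame)
print(out.dtype)             # float32
print(out.min(), out.max())  # 0.0 1.0
```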

If you followed the above, you can test it locally and see some nice results. Here's an example test script:

test script ⬇

```python
import requests

resp = requests.post("http://localhost:5000/post",
                     files={"image": open('02.png', 'rb'),    # 94 KB file
                            "video": open('test.mp4', 'rb'),  # 10-second video
                            })
# output generation took 03m:03s on CPU (AMD Ryzen 7 4800HS)
# for best results use a GPU
```
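The route in this guide writes generatedVideo.mp4 on the server side; if you instead return the file from post() (for example with Flask's send_file), the client can persist the response body with a small helper like this (the helper name is illustrative):

```python
def save_video(response_bytes, path="result.mp4"):
    # Write the raw response body to disk as the generated clip.
    with open(path, "wb") as f:
        f.write(response_bytes)
    return path

# Usage with the test script above: save_video(resp.content)
```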


Step 2

Make the Dockerfile

There wasn't much to change from the Dockerfile provided in the repo, apart from a few additions, as seen below:

```dockerfile
FROM nvcr.io/nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04

RUN DEBIAN_FRONTEND=noninteractive apt-get -qq update \
  && DEBIAN_FRONTEND=noninteractive apt-get -qqy install \
     python3-pip ffmpeg git less nano libsm6 libxext6 libxrender-dev \
  && rm -rf /var/lib/apt/lists/*

COPY . /app/
WORKDIR /app

RUN pip3 install --upgrade pip
RUN pip3 install \
    https://download.pytorch.org/whl/cu100/torch-1.0.0-cp36-cp36m-linux_x86_64.whl \
    git+https://github.com/1adrianb/face-alignment \
    -r requirements.txt

ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]
```

```text
### requirements.txt
imageio
imageio-ffmpeg
matplotlib
numpy
pandas
python-dateutil
pytz
PyYAML
scikit-image
scikit-learn
scipy
torch
torchvision
tqdm
IPython
flask
flask_cors
requests
```

NB: The best approach is to edit the Dockerfile and requirements.txt (shown above), then add app.py to a forked version of the repo so the container image builds successfully.

Step 3

Deploy to Google Cloud Platform as a VM instance on Compute Engine

So first you need a Google account. If this is your first time using Google Cloud Platform, you get $300 worth of cloud credit, which comes in handy for this and any other projects later on. Let's get started:

  • Create a project on GCP (Google Cloud Platform), e.g. “photo-mirrors-video”


  • Open your Cloud Shell editor


  • In the Cloud Shell terminal, run the command below to set your current project. The project ID in this case is “photo-mirrors-video”:

```bash
gcloud config set project [PROJECT_ID]
```


  • Upload a folder containing your version of this project. The uploaded folder should have a structure similar to the forked repo from Step 2.


  • Make sure you have followed everything up to this point, then enter this command in the terminal:

```bash
gcloud builds submit --tag gcr.io/[PROJECT_ID]/chosen-image-name
```

  • Once the container image has finished building, it will be pushed to your Google Container Registry.

  • Go back to the Cloud Console dashboard, navigate to Compute Engine, and select VM instances. Once opened, click Create Instance.


  • Under machine configuration, choose a minimum of 8 vCPUs to run the container. (A GPU would have been ideal, but the model was built with torch 1.0, so there are compatibility issues with the available configurations.)

  • Check the container option and specify the address of your container image. (Also check all the boxes under advanced options.)


  • Specify 30 GB as the disk size.

  • Allow HTTP traffic in the firewall settings.


  • Give it a few minutes and your API should be live.

For the sake of this example, edit the HTTP firewall rule to allow all ports to access the IP.


  • You can go to the external IP and add port 5000, which will take you to the index page.

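Once the VM is up, the only change from local testing is the base URL. A hypothetical helper for building the deployed endpoint (the function name is illustrative; substitute your instance's external IP for the placeholder):

```python
def endpoint(external_ip, port=5000, route="post"):
    # Build the deployed API URL from the VM's external IP.
    return f"http://{external_ip}:{port}/{route}"

# The Step 1 test script then posts to endpoint("EXTERNAL_IP")
# instead of http://localhost:5000/post.
print(endpoint("EXTERNAL_IP"))  # http://EXTERNAL_IP:5000/post
```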

Conclusion

If you've followed along to this point, you have successfully deployed an ML model as an API. Congratulations 👏👏. Thanks for sticking with me so far, and stay tuned for more how-to posts. It's been a pleasure sharing what I've learnt this week 👋


Tags: cloud, python, tutorial, webdev